Test Report: KVM_Linux_crio 20318

dd22c410311484da6763aae43511cabe19037b94:2025-01-27:38092

Test fail (12/312)

TestAddons/parallel/Ingress (162.31s)
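
The failing step below is the in-VM curl against the nginx ingress: the remote command exits with status 28, which is curl's "operation timed out" code. As a hypothetical manual re-check (assuming the addons-010792 profile from this run is still up), the commands recorded in the log can be replayed by hand:

    # hypothetical re-check; profile name and commands taken from the log below
    kubectl --context addons-010792 wait --for=condition=ready \
      --namespace=ingress-nginx pod \
      --selector=app.kubernetes.io/component=controller --timeout=90s
    out/minikube-linux-amd64 -p addons-010792 ssh \
      "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"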

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-010792 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-010792 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-010792 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [bdca297a-a0ac-4017-9eda-1326d1b0a09d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [bdca297a-a0ac-4017-9eda-1326d1b0a09d] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 19.003616485s
I0127 11:27:39.175543 1731396 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-010792 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-010792 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.810779851s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-010792 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-010792 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.45
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-010792 -n addons-010792
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-010792 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-010792 logs -n 25: (1.15274852s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-006941                                                                     | download-only-006941 | jenkins | v1.35.0 | 27 Jan 25 11:24 UTC | 27 Jan 25 11:24 UTC |
	| delete  | -p download-only-929502                                                                     | download-only-929502 | jenkins | v1.35.0 | 27 Jan 25 11:24 UTC | 27 Jan 25 11:24 UTC |
	| delete  | -p download-only-006941                                                                     | download-only-006941 | jenkins | v1.35.0 | 27 Jan 25 11:24 UTC | 27 Jan 25 11:24 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-922671 | jenkins | v1.35.0 | 27 Jan 25 11:24 UTC |                     |
	|         | binary-mirror-922671                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:45489                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-922671                                                                     | binary-mirror-922671 | jenkins | v1.35.0 | 27 Jan 25 11:24 UTC | 27 Jan 25 11:24 UTC |
	| addons  | disable dashboard -p                                                                        | addons-010792        | jenkins | v1.35.0 | 27 Jan 25 11:24 UTC |                     |
	|         | addons-010792                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-010792        | jenkins | v1.35.0 | 27 Jan 25 11:24 UTC |                     |
	|         | addons-010792                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-010792 --wait=true                                                                | addons-010792        | jenkins | v1.35.0 | 27 Jan 25 11:24 UTC | 27 Jan 25 11:26 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-010792 addons disable                                                                | addons-010792        | jenkins | v1.35.0 | 27 Jan 25 11:26 UTC | 27 Jan 25 11:26 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-010792 addons disable                                                                | addons-010792        | jenkins | v1.35.0 | 27 Jan 25 11:26 UTC | 27 Jan 25 11:26 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-010792        | jenkins | v1.35.0 | 27 Jan 25 11:26 UTC | 27 Jan 25 11:26 UTC |
	|         | -p addons-010792                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-010792 addons                                                                        | addons-010792        | jenkins | v1.35.0 | 27 Jan 25 11:27 UTC | 27 Jan 25 11:27 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-010792 addons disable                                                                | addons-010792        | jenkins | v1.35.0 | 27 Jan 25 11:27 UTC | 27 Jan 25 11:27 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-010792 ssh cat                                                                       | addons-010792        | jenkins | v1.35.0 | 27 Jan 25 11:27 UTC | 27 Jan 25 11:27 UTC |
	|         | /opt/local-path-provisioner/pvc-4b8022cd-4161-4f38-be88-efcf1f11f636_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-010792 addons disable                                                                | addons-010792        | jenkins | v1.35.0 | 27 Jan 25 11:27 UTC | 27 Jan 25 11:27 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-010792 addons disable                                                                | addons-010792        | jenkins | v1.35.0 | 27 Jan 25 11:27 UTC | 27 Jan 25 11:27 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-010792 ip                                                                            | addons-010792        | jenkins | v1.35.0 | 27 Jan 25 11:27 UTC | 27 Jan 25 11:27 UTC |
	| addons  | addons-010792 addons disable                                                                | addons-010792        | jenkins | v1.35.0 | 27 Jan 25 11:27 UTC | 27 Jan 25 11:27 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-010792 addons                                                                        | addons-010792        | jenkins | v1.35.0 | 27 Jan 25 11:27 UTC | 27 Jan 25 11:27 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-010792 addons                                                                        | addons-010792        | jenkins | v1.35.0 | 27 Jan 25 11:27 UTC | 27 Jan 25 11:27 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-010792 addons                                                                        | addons-010792        | jenkins | v1.35.0 | 27 Jan 25 11:27 UTC | 27 Jan 25 11:27 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-010792 ssh curl -s                                                                   | addons-010792        | jenkins | v1.35.0 | 27 Jan 25 11:27 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-010792 addons                                                                        | addons-010792        | jenkins | v1.35.0 | 27 Jan 25 11:27 UTC | 27 Jan 25 11:27 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-010792 addons                                                                        | addons-010792        | jenkins | v1.35.0 | 27 Jan 25 11:27 UTC | 27 Jan 25 11:27 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-010792 ip                                                                            | addons-010792        | jenkins | v1.35.0 | 27 Jan 25 11:29 UTC | 27 Jan 25 11:29 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 11:24:22
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 11:24:22.174579 1732084 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:24:22.174722 1732084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:24:22.174732 1732084 out.go:358] Setting ErrFile to fd 2...
	I0127 11:24:22.174739 1732084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:24:22.174927 1732084 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-1724227/.minikube/bin
	I0127 11:24:22.175507 1732084 out.go:352] Setting JSON to false
	I0127 11:24:22.176422 1732084 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":29203,"bootTime":1737947859,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 11:24:22.176520 1732084 start.go:139] virtualization: kvm guest
	I0127 11:24:22.178504 1732084 out.go:177] * [addons-010792] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 11:24:22.179707 1732084 out.go:177]   - MINIKUBE_LOCATION=20318
	I0127 11:24:22.179717 1732084 notify.go:220] Checking for updates...
	I0127 11:24:22.182005 1732084 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:24:22.183096 1732084 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20318-1724227/kubeconfig
	I0127 11:24:22.184231 1732084 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-1724227/.minikube
	I0127 11:24:22.185335 1732084 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 11:24:22.186395 1732084 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 11:24:22.187586 1732084 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 11:24:22.219132 1732084 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 11:24:22.220215 1732084 start.go:297] selected driver: kvm2
	I0127 11:24:22.220229 1732084 start.go:901] validating driver "kvm2" against <nil>
	I0127 11:24:22.220244 1732084 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 11:24:22.220939 1732084 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:24:22.221058 1732084 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20318-1724227/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 11:24:22.235475 1732084 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 11:24:22.235527 1732084 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 11:24:22.235776 1732084 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 11:24:22.235813 1732084 cni.go:84] Creating CNI manager for ""
	I0127 11:24:22.235867 1732084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:24:22.235878 1732084 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 11:24:22.235938 1732084 start.go:340] cluster config:
	{Name:addons-010792 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-010792 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:24:22.236061 1732084 iso.go:125] acquiring lock: {Name:mkadfcca31fda677c8c62cad1d325dd2bd0a2473 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:24:22.237630 1732084 out.go:177] * Starting "addons-010792" primary control-plane node in "addons-010792" cluster
	I0127 11:24:22.238832 1732084 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 11:24:22.238861 1732084 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 11:24:22.238868 1732084 cache.go:56] Caching tarball of preloaded images
	I0127 11:24:22.238946 1732084 preload.go:172] Found /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 11:24:22.238956 1732084 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 11:24:22.239256 1732084 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/config.json ...
	I0127 11:24:22.239277 1732084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/config.json: {Name:mkc5ca2ef156e4ff5d3456a4daffa941c1527063 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:24:22.239409 1732084 start.go:360] acquireMachinesLock for addons-010792: {Name:mk206ddd5e564ea6986d65bef76be5837a8b5360 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 11:24:22.239454 1732084 start.go:364] duration metric: took 31.442µs to acquireMachinesLock for "addons-010792"
	I0127 11:24:22.239472 1732084 start.go:93] Provisioning new machine with config: &{Name:addons-010792 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-010792 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 11:24:22.239524 1732084 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 11:24:22.240877 1732084 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0127 11:24:22.241021 1732084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:24:22.241068 1732084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:24:22.255233 1732084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40033
	I0127 11:24:22.255685 1732084 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:24:22.256261 1732084 main.go:141] libmachine: Using API Version  1
	I0127 11:24:22.256282 1732084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:24:22.256679 1732084 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:24:22.256920 1732084 main.go:141] libmachine: (addons-010792) Calling .GetMachineName
	I0127 11:24:22.257045 1732084 main.go:141] libmachine: (addons-010792) Calling .DriverName
	I0127 11:24:22.257182 1732084 start.go:159] libmachine.API.Create for "addons-010792" (driver="kvm2")
	I0127 11:24:22.257220 1732084 client.go:168] LocalClient.Create starting
	I0127 11:24:22.257266 1732084 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem
	I0127 11:24:22.356704 1732084 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem
	I0127 11:24:22.490641 1732084 main.go:141] libmachine: Running pre-create checks...
	I0127 11:24:22.490669 1732084 main.go:141] libmachine: (addons-010792) Calling .PreCreateCheck
	I0127 11:24:22.491205 1732084 main.go:141] libmachine: (addons-010792) Calling .GetConfigRaw
	I0127 11:24:22.491727 1732084 main.go:141] libmachine: Creating machine...
	I0127 11:24:22.491744 1732084 main.go:141] libmachine: (addons-010792) Calling .Create
	I0127 11:24:22.491898 1732084 main.go:141] libmachine: (addons-010792) creating KVM machine...
	I0127 11:24:22.491923 1732084 main.go:141] libmachine: (addons-010792) creating network...
	I0127 11:24:22.493053 1732084 main.go:141] libmachine: (addons-010792) DBG | found existing default KVM network
	I0127 11:24:22.493836 1732084 main.go:141] libmachine: (addons-010792) DBG | I0127 11:24:22.493669 1732106 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123a40}
	I0127 11:24:22.493861 1732084 main.go:141] libmachine: (addons-010792) DBG | created network xml: 
	I0127 11:24:22.493877 1732084 main.go:141] libmachine: (addons-010792) DBG | <network>
	I0127 11:24:22.493886 1732084 main.go:141] libmachine: (addons-010792) DBG |   <name>mk-addons-010792</name>
	I0127 11:24:22.493899 1732084 main.go:141] libmachine: (addons-010792) DBG |   <dns enable='no'/>
	I0127 11:24:22.493906 1732084 main.go:141] libmachine: (addons-010792) DBG |   
	I0127 11:24:22.493918 1732084 main.go:141] libmachine: (addons-010792) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0127 11:24:22.493949 1732084 main.go:141] libmachine: (addons-010792) DBG |     <dhcp>
	I0127 11:24:22.493959 1732084 main.go:141] libmachine: (addons-010792) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0127 11:24:22.493973 1732084 main.go:141] libmachine: (addons-010792) DBG |     </dhcp>
	I0127 11:24:22.493988 1732084 main.go:141] libmachine: (addons-010792) DBG |   </ip>
	I0127 11:24:22.493998 1732084 main.go:141] libmachine: (addons-010792) DBG |   
	I0127 11:24:22.494009 1732084 main.go:141] libmachine: (addons-010792) DBG | </network>
	I0127 11:24:22.494020 1732084 main.go:141] libmachine: (addons-010792) DBG | 
	I0127 11:24:22.499156 1732084 main.go:141] libmachine: (addons-010792) DBG | trying to create private KVM network mk-addons-010792 192.168.39.0/24...
	I0127 11:24:22.564204 1732084 main.go:141] libmachine: (addons-010792) DBG | private KVM network mk-addons-010792 192.168.39.0/24 created
	I0127 11:24:22.564254 1732084 main.go:141] libmachine: (addons-010792) DBG | I0127 11:24:22.564169 1732106 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20318-1724227/.minikube
	I0127 11:24:22.564266 1732084 main.go:141] libmachine: (addons-010792) setting up store path in /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/addons-010792 ...
	I0127 11:24:22.564301 1732084 main.go:141] libmachine: (addons-010792) building disk image from file:///home/jenkins/minikube-integration/20318-1724227/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 11:24:22.564321 1732084 main.go:141] libmachine: (addons-010792) Downloading /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20318-1724227/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 11:24:22.868797 1732084 main.go:141] libmachine: (addons-010792) DBG | I0127 11:24:22.868642 1732106 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/addons-010792/id_rsa...
	I0127 11:24:23.040346 1732084 main.go:141] libmachine: (addons-010792) DBG | I0127 11:24:23.040193 1732106 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/addons-010792/addons-010792.rawdisk...
	I0127 11:24:23.040375 1732084 main.go:141] libmachine: (addons-010792) DBG | Writing magic tar header
	I0127 11:24:23.040385 1732084 main.go:141] libmachine: (addons-010792) DBG | Writing SSH key tar header
	I0127 11:24:23.040393 1732084 main.go:141] libmachine: (addons-010792) DBG | I0127 11:24:23.040314 1732106 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/addons-010792 ...
	I0127 11:24:23.040404 1732084 main.go:141] libmachine: (addons-010792) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/addons-010792
	I0127 11:24:23.040528 1732084 main.go:141] libmachine: (addons-010792) setting executable bit set on /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/addons-010792 (perms=drwx------)
	I0127 11:24:23.040566 1732084 main.go:141] libmachine: (addons-010792) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines
	I0127 11:24:23.040573 1732084 main.go:141] libmachine: (addons-010792) setting executable bit set on /home/jenkins/minikube-integration/20318-1724227/.minikube/machines (perms=drwxr-xr-x)
	I0127 11:24:23.040583 1732084 main.go:141] libmachine: (addons-010792) setting executable bit set on /home/jenkins/minikube-integration/20318-1724227/.minikube (perms=drwxr-xr-x)
	I0127 11:24:23.040592 1732084 main.go:141] libmachine: (addons-010792) setting executable bit set on /home/jenkins/minikube-integration/20318-1724227 (perms=drwxrwxr-x)
	I0127 11:24:23.040603 1732084 main.go:141] libmachine: (addons-010792) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20318-1724227/.minikube
	I0127 11:24:23.040619 1732084 main.go:141] libmachine: (addons-010792) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20318-1724227
	I0127 11:24:23.040637 1732084 main.go:141] libmachine: (addons-010792) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 11:24:23.040645 1732084 main.go:141] libmachine: (addons-010792) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 11:24:23.040652 1732084 main.go:141] libmachine: (addons-010792) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 11:24:23.040660 1732084 main.go:141] libmachine: (addons-010792) creating domain...
	I0127 11:24:23.040665 1732084 main.go:141] libmachine: (addons-010792) DBG | checking permissions on dir: /home/jenkins
	I0127 11:24:23.040678 1732084 main.go:141] libmachine: (addons-010792) DBG | checking permissions on dir: /home
	I0127 11:24:23.040690 1732084 main.go:141] libmachine: (addons-010792) DBG | skipping /home - not owner
	I0127 11:24:23.041592 1732084 main.go:141] libmachine: (addons-010792) define libvirt domain using xml: 
	I0127 11:24:23.041617 1732084 main.go:141] libmachine: (addons-010792) <domain type='kvm'>
	I0127 11:24:23.041627 1732084 main.go:141] libmachine: (addons-010792)   <name>addons-010792</name>
	I0127 11:24:23.041633 1732084 main.go:141] libmachine: (addons-010792)   <memory unit='MiB'>4000</memory>
	I0127 11:24:23.041655 1732084 main.go:141] libmachine: (addons-010792)   <vcpu>2</vcpu>
	I0127 11:24:23.041671 1732084 main.go:141] libmachine: (addons-010792)   <features>
	I0127 11:24:23.041679 1732084 main.go:141] libmachine: (addons-010792)     <acpi/>
	I0127 11:24:23.041692 1732084 main.go:141] libmachine: (addons-010792)     <apic/>
	I0127 11:24:23.041726 1732084 main.go:141] libmachine: (addons-010792)     <pae/>
	I0127 11:24:23.041754 1732084 main.go:141] libmachine: (addons-010792)     
	I0127 11:24:23.041765 1732084 main.go:141] libmachine: (addons-010792)   </features>
	I0127 11:24:23.041795 1732084 main.go:141] libmachine: (addons-010792)   <cpu mode='host-passthrough'>
	I0127 11:24:23.041807 1732084 main.go:141] libmachine: (addons-010792)   
	I0127 11:24:23.041814 1732084 main.go:141] libmachine: (addons-010792)   </cpu>
	I0127 11:24:23.041823 1732084 main.go:141] libmachine: (addons-010792)   <os>
	I0127 11:24:23.041830 1732084 main.go:141] libmachine: (addons-010792)     <type>hvm</type>
	I0127 11:24:23.041838 1732084 main.go:141] libmachine: (addons-010792)     <boot dev='cdrom'/>
	I0127 11:24:23.041849 1732084 main.go:141] libmachine: (addons-010792)     <boot dev='hd'/>
	I0127 11:24:23.041874 1732084 main.go:141] libmachine: (addons-010792)     <bootmenu enable='no'/>
	I0127 11:24:23.041890 1732084 main.go:141] libmachine: (addons-010792)   </os>
	I0127 11:24:23.041902 1732084 main.go:141] libmachine: (addons-010792)   <devices>
	I0127 11:24:23.041917 1732084 main.go:141] libmachine: (addons-010792)     <disk type='file' device='cdrom'>
	I0127 11:24:23.041935 1732084 main.go:141] libmachine: (addons-010792)       <source file='/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/addons-010792/boot2docker.iso'/>
	I0127 11:24:23.041945 1732084 main.go:141] libmachine: (addons-010792)       <target dev='hdc' bus='scsi'/>
	I0127 11:24:23.041953 1732084 main.go:141] libmachine: (addons-010792)       <readonly/>
	I0127 11:24:23.041962 1732084 main.go:141] libmachine: (addons-010792)     </disk>
	I0127 11:24:23.041972 1732084 main.go:141] libmachine: (addons-010792)     <disk type='file' device='disk'>
	I0127 11:24:23.041981 1732084 main.go:141] libmachine: (addons-010792)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 11:24:23.042003 1732084 main.go:141] libmachine: (addons-010792)       <source file='/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/addons-010792/addons-010792.rawdisk'/>
	I0127 11:24:23.042065 1732084 main.go:141] libmachine: (addons-010792)       <target dev='hda' bus='virtio'/>
	I0127 11:24:23.042074 1732084 main.go:141] libmachine: (addons-010792)     </disk>
	I0127 11:24:23.042081 1732084 main.go:141] libmachine: (addons-010792)     <interface type='network'>
	I0127 11:24:23.042091 1732084 main.go:141] libmachine: (addons-010792)       <source network='mk-addons-010792'/>
	I0127 11:24:23.042110 1732084 main.go:141] libmachine: (addons-010792)       <model type='virtio'/>
	I0127 11:24:23.042123 1732084 main.go:141] libmachine: (addons-010792)     </interface>
	I0127 11:24:23.042132 1732084 main.go:141] libmachine: (addons-010792)     <interface type='network'>
	I0127 11:24:23.042145 1732084 main.go:141] libmachine: (addons-010792)       <source network='default'/>
	I0127 11:24:23.042157 1732084 main.go:141] libmachine: (addons-010792)       <model type='virtio'/>
	I0127 11:24:23.042174 1732084 main.go:141] libmachine: (addons-010792)     </interface>
	I0127 11:24:23.042185 1732084 main.go:141] libmachine: (addons-010792)     <serial type='pty'>
	I0127 11:24:23.042195 1732084 main.go:141] libmachine: (addons-010792)       <target port='0'/>
	I0127 11:24:23.042204 1732084 main.go:141] libmachine: (addons-010792)     </serial>
	I0127 11:24:23.042216 1732084 main.go:141] libmachine: (addons-010792)     <console type='pty'>
	I0127 11:24:23.042226 1732084 main.go:141] libmachine: (addons-010792)       <target type='serial' port='0'/>
	I0127 11:24:23.042236 1732084 main.go:141] libmachine: (addons-010792)     </console>
	I0127 11:24:23.042242 1732084 main.go:141] libmachine: (addons-010792)     <rng model='virtio'>
	I0127 11:24:23.042253 1732084 main.go:141] libmachine: (addons-010792)       <backend model='random'>/dev/random</backend>
	I0127 11:24:23.042264 1732084 main.go:141] libmachine: (addons-010792)     </rng>
	I0127 11:24:23.042275 1732084 main.go:141] libmachine: (addons-010792)     
	I0127 11:24:23.042282 1732084 main.go:141] libmachine: (addons-010792)     
	I0127 11:24:23.042292 1732084 main.go:141] libmachine: (addons-010792)   </devices>
	I0127 11:24:23.042299 1732084 main.go:141] libmachine: (addons-010792) </domain>
	I0127 11:24:23.042308 1732084 main.go:141] libmachine: (addons-010792) 
	I0127 11:24:23.046531 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:82:5e:78 in network default
	I0127 11:24:23.047106 1732084 main.go:141] libmachine: (addons-010792) starting domain...
	I0127 11:24:23.047125 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:23.047130 1732084 main.go:141] libmachine: (addons-010792) ensuring networks are active...
	I0127 11:24:23.047701 1732084 main.go:141] libmachine: (addons-010792) Ensuring network default is active
	I0127 11:24:23.047998 1732084 main.go:141] libmachine: (addons-010792) Ensuring network mk-addons-010792 is active
	I0127 11:24:23.048472 1732084 main.go:141] libmachine: (addons-010792) getting domain XML...
	I0127 11:24:23.049111 1732084 main.go:141] libmachine: (addons-010792) creating domain...
	I0127 11:24:24.228445 1732084 main.go:141] libmachine: (addons-010792) waiting for IP...
	I0127 11:24:24.229142 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:24.229493 1732084 main.go:141] libmachine: (addons-010792) DBG | unable to find current IP address of domain addons-010792 in network mk-addons-010792
	I0127 11:24:24.229518 1732084 main.go:141] libmachine: (addons-010792) DBG | I0127 11:24:24.229479 1732106 retry.go:31] will retry after 298.632817ms: waiting for domain to come up
	I0127 11:24:24.530075 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:24.530533 1732084 main.go:141] libmachine: (addons-010792) DBG | unable to find current IP address of domain addons-010792 in network mk-addons-010792
	I0127 11:24:24.530575 1732084 main.go:141] libmachine: (addons-010792) DBG | I0127 11:24:24.530508 1732106 retry.go:31] will retry after 253.153063ms: waiting for domain to come up
	I0127 11:24:24.784896 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:24.785327 1732084 main.go:141] libmachine: (addons-010792) DBG | unable to find current IP address of domain addons-010792 in network mk-addons-010792
	I0127 11:24:24.785362 1732084 main.go:141] libmachine: (addons-010792) DBG | I0127 11:24:24.785268 1732106 retry.go:31] will retry after 383.090871ms: waiting for domain to come up
	I0127 11:24:25.169694 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:25.170039 1732084 main.go:141] libmachine: (addons-010792) DBG | unable to find current IP address of domain addons-010792 in network mk-addons-010792
	I0127 11:24:25.170066 1732084 main.go:141] libmachine: (addons-010792) DBG | I0127 11:24:25.169998 1732106 retry.go:31] will retry after 557.532563ms: waiting for domain to come up
	I0127 11:24:25.728701 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:25.729159 1732084 main.go:141] libmachine: (addons-010792) DBG | unable to find current IP address of domain addons-010792 in network mk-addons-010792
	I0127 11:24:25.729192 1732084 main.go:141] libmachine: (addons-010792) DBG | I0127 11:24:25.729124 1732106 retry.go:31] will retry after 759.705563ms: waiting for domain to come up
	I0127 11:24:26.490118 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:26.490403 1732084 main.go:141] libmachine: (addons-010792) DBG | unable to find current IP address of domain addons-010792 in network mk-addons-010792
	I0127 11:24:26.490427 1732084 main.go:141] libmachine: (addons-010792) DBG | I0127 11:24:26.490372 1732106 retry.go:31] will retry after 927.097461ms: waiting for domain to come up
	I0127 11:24:27.418571 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:27.418966 1732084 main.go:141] libmachine: (addons-010792) DBG | unable to find current IP address of domain addons-010792 in network mk-addons-010792
	I0127 11:24:27.419082 1732084 main.go:141] libmachine: (addons-010792) DBG | I0127 11:24:27.418974 1732106 retry.go:31] will retry after 1.084605617s: waiting for domain to come up
	I0127 11:24:28.504717 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:28.505114 1732084 main.go:141] libmachine: (addons-010792) DBG | unable to find current IP address of domain addons-010792 in network mk-addons-010792
	I0127 11:24:28.505142 1732084 main.go:141] libmachine: (addons-010792) DBG | I0127 11:24:28.505069 1732106 retry.go:31] will retry after 1.399987952s: waiting for domain to come up
	I0127 11:24:29.906212 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:29.906665 1732084 main.go:141] libmachine: (addons-010792) DBG | unable to find current IP address of domain addons-010792 in network mk-addons-010792
	I0127 11:24:29.906691 1732084 main.go:141] libmachine: (addons-010792) DBG | I0127 11:24:29.906629 1732106 retry.go:31] will retry after 1.494608018s: waiting for domain to come up
	I0127 11:24:31.403211 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:31.403675 1732084 main.go:141] libmachine: (addons-010792) DBG | unable to find current IP address of domain addons-010792 in network mk-addons-010792
	I0127 11:24:31.403701 1732084 main.go:141] libmachine: (addons-010792) DBG | I0127 11:24:31.403621 1732106 retry.go:31] will retry after 1.763429659s: waiting for domain to come up
	I0127 11:24:33.168335 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:33.168748 1732084 main.go:141] libmachine: (addons-010792) DBG | unable to find current IP address of domain addons-010792 in network mk-addons-010792
	I0127 11:24:33.168775 1732084 main.go:141] libmachine: (addons-010792) DBG | I0127 11:24:33.168718 1732106 retry.go:31] will retry after 1.967322099s: waiting for domain to come up
	I0127 11:24:35.137541 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:35.137930 1732084 main.go:141] libmachine: (addons-010792) DBG | unable to find current IP address of domain addons-010792 in network mk-addons-010792
	I0127 11:24:35.137967 1732084 main.go:141] libmachine: (addons-010792) DBG | I0127 11:24:35.137898 1732106 retry.go:31] will retry after 3.294565177s: waiting for domain to come up
	I0127 11:24:38.433663 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:38.434180 1732084 main.go:141] libmachine: (addons-010792) DBG | unable to find current IP address of domain addons-010792 in network mk-addons-010792
	I0127 11:24:38.434213 1732084 main.go:141] libmachine: (addons-010792) DBG | I0127 11:24:38.434138 1732106 retry.go:31] will retry after 3.961374086s: waiting for domain to come up
	I0127 11:24:42.396665 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:42.397090 1732084 main.go:141] libmachine: (addons-010792) found domain IP: 192.168.39.45
	I0127 11:24:42.397116 1732084 main.go:141] libmachine: (addons-010792) reserving static IP address...
	I0127 11:24:42.397149 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has current primary IP address 192.168.39.45 and MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:42.397479 1732084 main.go:141] libmachine: (addons-010792) DBG | unable to find host DHCP lease matching {name: "addons-010792", mac: "52:54:00:96:24:d7", ip: "192.168.39.45"} in network mk-addons-010792
	I0127 11:24:42.466830 1732084 main.go:141] libmachine: (addons-010792) DBG | Getting to WaitForSSH function...
	I0127 11:24:42.466869 1732084 main.go:141] libmachine: (addons-010792) reserved static IP address 192.168.39.45 for domain addons-010792
	I0127 11:24:42.466931 1732084 main.go:141] libmachine: (addons-010792) waiting for SSH...
	I0127 11:24:42.469097 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:42.469471 1732084 main.go:141] libmachine: (addons-010792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:24:d7", ip: ""} in network mk-addons-010792: {Iface:virbr1 ExpiryTime:2025-01-27 12:24:36 +0000 UTC Type:0 Mac:52:54:00:96:24:d7 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:minikube Clientid:01:52:54:00:96:24:d7}
	I0127 11:24:42.469500 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined IP address 192.168.39.45 and MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:42.469643 1732084 main.go:141] libmachine: (addons-010792) DBG | Using SSH client type: external
	I0127 11:24:42.469671 1732084 main.go:141] libmachine: (addons-010792) DBG | Using SSH private key: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/addons-010792/id_rsa (-rw-------)
	I0127 11:24:42.469718 1732084 main.go:141] libmachine: (addons-010792) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.45 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/addons-010792/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 11:24:42.469740 1732084 main.go:141] libmachine: (addons-010792) DBG | About to run SSH command:
	I0127 11:24:42.469754 1732084 main.go:141] libmachine: (addons-010792) DBG | exit 0
	I0127 11:24:42.594138 1732084 main.go:141] libmachine: (addons-010792) DBG | SSH cmd err, output: <nil>: 
	I0127 11:24:42.594390 1732084 main.go:141] libmachine: (addons-010792) KVM machine creation complete
	I0127 11:24:42.594638 1732084 main.go:141] libmachine: (addons-010792) Calling .GetConfigRaw
	I0127 11:24:42.631084 1732084 main.go:141] libmachine: (addons-010792) Calling .DriverName
	I0127 11:24:42.631338 1732084 main.go:141] libmachine: (addons-010792) Calling .DriverName
	I0127 11:24:42.631529 1732084 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0127 11:24:42.631541 1732084 main.go:141] libmachine: (addons-010792) Calling .GetState
	I0127 11:24:42.632787 1732084 main.go:141] libmachine: Detecting operating system of created instance...
	I0127 11:24:42.632800 1732084 main.go:141] libmachine: Waiting for SSH to be available...
	I0127 11:24:42.632805 1732084 main.go:141] libmachine: Getting to WaitForSSH function...
	I0127 11:24:42.632811 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHHostname
	I0127 11:24:42.635053 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:42.635445 1732084 main.go:141] libmachine: (addons-010792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:24:d7", ip: ""} in network mk-addons-010792: {Iface:virbr1 ExpiryTime:2025-01-27 12:24:36 +0000 UTC Type:0 Mac:52:54:00:96:24:d7 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-010792 Clientid:01:52:54:00:96:24:d7}
	I0127 11:24:42.635486 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined IP address 192.168.39.45 and MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:42.635651 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHPort
	I0127 11:24:42.635821 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHKeyPath
	I0127 11:24:42.635980 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHKeyPath
	I0127 11:24:42.636114 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHUsername
	I0127 11:24:42.636326 1732084 main.go:141] libmachine: Using SSH client type: native
	I0127 11:24:42.636529 1732084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I0127 11:24:42.636543 1732084 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0127 11:24:42.745975 1732084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 11:24:42.746006 1732084 main.go:141] libmachine: Detecting the provisioner...
	I0127 11:24:42.746017 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHHostname
	I0127 11:24:42.748402 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:42.748865 1732084 main.go:141] libmachine: (addons-010792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:24:d7", ip: ""} in network mk-addons-010792: {Iface:virbr1 ExpiryTime:2025-01-27 12:24:36 +0000 UTC Type:0 Mac:52:54:00:96:24:d7 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-010792 Clientid:01:52:54:00:96:24:d7}
	I0127 11:24:42.748899 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined IP address 192.168.39.45 and MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:42.749121 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHPort
	I0127 11:24:42.749296 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHKeyPath
	I0127 11:24:42.749474 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHKeyPath
	I0127 11:24:42.749617 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHUsername
	I0127 11:24:42.749807 1732084 main.go:141] libmachine: Using SSH client type: native
	I0127 11:24:42.750098 1732084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I0127 11:24:42.750119 1732084 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0127 11:24:42.863123 1732084 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0127 11:24:42.863193 1732084 main.go:141] libmachine: found compatible host: buildroot
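
Provisioner detection above is just "cat /etc/os-release" over SSH followed by matching the ID field; Buildroot is reported, so the buildroot provisioner is chosen. A small sketch of that parsing, using the exact file contents shown above (parseOSRelease is a hypothetical helper, not minikube's code):

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // parseOSRelease turns /etc/os-release KEY=value lines into a map,
    // stripping surrounding quotes from the values.
    func parseOSRelease(contents string) map[string]string {
        out := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(contents))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || !strings.Contains(line, "=") {
                continue
            }
            kv := strings.SplitN(line, "=", 2)
            out[kv[0]] = strings.Trim(kv[1], `"`)
        }
        return out
    }

    func main() {
        osRelease := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
        info := parseOSRelease(osRelease)
        if info["ID"] == "buildroot" {
            fmt.Println("found compatible host:", info["ID"], info["VERSION_ID"])
        }
    }
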
	I0127 11:24:42.863205 1732084 main.go:141] libmachine: Provisioning with buildroot...
	I0127 11:24:42.863215 1732084 main.go:141] libmachine: (addons-010792) Calling .GetMachineName
	I0127 11:24:42.863435 1732084 buildroot.go:166] provisioning hostname "addons-010792"
	I0127 11:24:42.863470 1732084 main.go:141] libmachine: (addons-010792) Calling .GetMachineName
	I0127 11:24:42.863657 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHHostname
	I0127 11:24:42.866317 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:42.866654 1732084 main.go:141] libmachine: (addons-010792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:24:d7", ip: ""} in network mk-addons-010792: {Iface:virbr1 ExpiryTime:2025-01-27 12:24:36 +0000 UTC Type:0 Mac:52:54:00:96:24:d7 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-010792 Clientid:01:52:54:00:96:24:d7}
	I0127 11:24:42.866684 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined IP address 192.168.39.45 and MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:42.866893 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHPort
	I0127 11:24:42.867061 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHKeyPath
	I0127 11:24:42.867181 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHKeyPath
	I0127 11:24:42.867280 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHUsername
	I0127 11:24:42.867405 1732084 main.go:141] libmachine: Using SSH client type: native
	I0127 11:24:42.867616 1732084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I0127 11:24:42.867633 1732084 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-010792 && echo "addons-010792" | sudo tee /etc/hostname
	I0127 11:24:42.991377 1732084 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-010792
	
	I0127 11:24:42.991413 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHHostname
	I0127 11:24:42.994011 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:42.994318 1732084 main.go:141] libmachine: (addons-010792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:24:d7", ip: ""} in network mk-addons-010792: {Iface:virbr1 ExpiryTime:2025-01-27 12:24:36 +0000 UTC Type:0 Mac:52:54:00:96:24:d7 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-010792 Clientid:01:52:54:00:96:24:d7}
	I0127 11:24:42.994352 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined IP address 192.168.39.45 and MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:42.994524 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHPort
	I0127 11:24:42.994724 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHKeyPath
	I0127 11:24:42.994889 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHKeyPath
	I0127 11:24:42.995018 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHUsername
	I0127 11:24:42.995171 1732084 main.go:141] libmachine: Using SSH client type: native
	I0127 11:24:42.995436 1732084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I0127 11:24:42.995462 1732084 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-010792' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-010792/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-010792' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 11:24:43.114476 1732084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 11:24:43.114515 1732084 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20318-1724227/.minikube CaCertPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20318-1724227/.minikube}
	I0127 11:24:43.114551 1732084 buildroot.go:174] setting up certificates
	I0127 11:24:43.114568 1732084 provision.go:84] configureAuth start
	I0127 11:24:43.114583 1732084 main.go:141] libmachine: (addons-010792) Calling .GetMachineName
	I0127 11:24:43.114933 1732084 main.go:141] libmachine: (addons-010792) Calling .GetIP
	I0127 11:24:43.117222 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:43.117619 1732084 main.go:141] libmachine: (addons-010792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:24:d7", ip: ""} in network mk-addons-010792: {Iface:virbr1 ExpiryTime:2025-01-27 12:24:36 +0000 UTC Type:0 Mac:52:54:00:96:24:d7 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-010792 Clientid:01:52:54:00:96:24:d7}
	I0127 11:24:43.117647 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined IP address 192.168.39.45 and MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:43.117782 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHHostname
	I0127 11:24:43.119935 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:43.120246 1732084 main.go:141] libmachine: (addons-010792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:24:d7", ip: ""} in network mk-addons-010792: {Iface:virbr1 ExpiryTime:2025-01-27 12:24:36 +0000 UTC Type:0 Mac:52:54:00:96:24:d7 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-010792 Clientid:01:52:54:00:96:24:d7}
	I0127 11:24:43.120274 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined IP address 192.168.39.45 and MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:43.120382 1732084 provision.go:143] copyHostCerts
	I0127 11:24:43.120472 1732084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.pem (1078 bytes)
	I0127 11:24:43.120593 1732084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20318-1724227/.minikube/cert.pem (1123 bytes)
	I0127 11:24:43.120679 1732084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20318-1724227/.minikube/key.pem (1675 bytes)
	I0127 11:24:43.120746 1732084 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca-key.pem org=jenkins.addons-010792 san=[127.0.0.1 192.168.39.45 addons-010792 localhost minikube]
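
The server certificate generated here is signed by the cached CA and carries the SANs listed in the log (127.0.0.1, 192.168.39.45, addons-010792, localhost, minikube). A self-contained sketch of issuing such a cert with crypto/x509, using a throwaway in-memory CA in place of .minikube/certs/ca.pem and ca-key.pem; error handling is elided for brevity:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Throwaway CA (stand-in for the cached ca.pem / ca-key.pem).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert with the SAN list and org reported in the log.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-010792"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"addons-010792", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.45")},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        fmt.Println(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
    }
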
	I0127 11:24:43.249955 1732084 provision.go:177] copyRemoteCerts
	I0127 11:24:43.250032 1732084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 11:24:43.250102 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHHostname
	I0127 11:24:43.252787 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:43.253138 1732084 main.go:141] libmachine: (addons-010792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:24:d7", ip: ""} in network mk-addons-010792: {Iface:virbr1 ExpiryTime:2025-01-27 12:24:36 +0000 UTC Type:0 Mac:52:54:00:96:24:d7 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-010792 Clientid:01:52:54:00:96:24:d7}
	I0127 11:24:43.253173 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined IP address 192.168.39.45 and MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:43.253362 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHPort
	I0127 11:24:43.253533 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHKeyPath
	I0127 11:24:43.253709 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHUsername
	I0127 11:24:43.253827 1732084 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/addons-010792/id_rsa Username:docker}
	I0127 11:24:43.344297 1732084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 11:24:43.366392 1732084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0127 11:24:43.388309 1732084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 11:24:43.409868 1732084 provision.go:87] duration metric: took 295.282298ms to configureAuth
	I0127 11:24:43.409907 1732084 buildroot.go:189] setting minikube options for container-runtime
	I0127 11:24:43.410089 1732084 config.go:182] Loaded profile config "addons-010792": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:24:43.410187 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHHostname
	I0127 11:24:43.412755 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:43.413109 1732084 main.go:141] libmachine: (addons-010792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:24:d7", ip: ""} in network mk-addons-010792: {Iface:virbr1 ExpiryTime:2025-01-27 12:24:36 +0000 UTC Type:0 Mac:52:54:00:96:24:d7 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-010792 Clientid:01:52:54:00:96:24:d7}
	I0127 11:24:43.413134 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined IP address 192.168.39.45 and MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:43.413294 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHPort
	I0127 11:24:43.413505 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHKeyPath
	I0127 11:24:43.413656 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHKeyPath
	I0127 11:24:43.413880 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHUsername
	I0127 11:24:43.414092 1732084 main.go:141] libmachine: Using SSH client type: native
	I0127 11:24:43.414276 1732084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I0127 11:24:43.414298 1732084 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 11:24:43.632256 1732084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 11:24:43.632292 1732084 main.go:141] libmachine: Checking connection to Docker...
	I0127 11:24:43.632303 1732084 main.go:141] libmachine: (addons-010792) Calling .GetURL
	I0127 11:24:43.633626 1732084 main.go:141] libmachine: (addons-010792) DBG | using libvirt version 6000000
	I0127 11:24:43.635590 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:43.635981 1732084 main.go:141] libmachine: (addons-010792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:24:d7", ip: ""} in network mk-addons-010792: {Iface:virbr1 ExpiryTime:2025-01-27 12:24:36 +0000 UTC Type:0 Mac:52:54:00:96:24:d7 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-010792 Clientid:01:52:54:00:96:24:d7}
	I0127 11:24:43.636014 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined IP address 192.168.39.45 and MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:43.636199 1732084 main.go:141] libmachine: Docker is up and running!
	I0127 11:24:43.636230 1732084 main.go:141] libmachine: Reticulating splines...
	I0127 11:24:43.636241 1732084 client.go:171] duration metric: took 21.379007212s to LocalClient.Create
	I0127 11:24:43.636271 1732084 start.go:167] duration metric: took 21.379090739s to libmachine.API.Create "addons-010792"
	I0127 11:24:43.636293 1732084 start.go:293] postStartSetup for "addons-010792" (driver="kvm2")
	I0127 11:24:43.636310 1732084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 11:24:43.636333 1732084 main.go:141] libmachine: (addons-010792) Calling .DriverName
	I0127 11:24:43.636608 1732084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 11:24:43.636634 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHHostname
	I0127 11:24:43.638729 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:43.639056 1732084 main.go:141] libmachine: (addons-010792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:24:d7", ip: ""} in network mk-addons-010792: {Iface:virbr1 ExpiryTime:2025-01-27 12:24:36 +0000 UTC Type:0 Mac:52:54:00:96:24:d7 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-010792 Clientid:01:52:54:00:96:24:d7}
	I0127 11:24:43.639086 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined IP address 192.168.39.45 and MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:43.639194 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHPort
	I0127 11:24:43.639368 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHKeyPath
	I0127 11:24:43.639490 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHUsername
	I0127 11:24:43.639607 1732084 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/addons-010792/id_rsa Username:docker}
	I0127 11:24:43.724482 1732084 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 11:24:43.728382 1732084 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 11:24:43.728413 1732084 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-1724227/.minikube/addons for local assets ...
	I0127 11:24:43.728505 1732084 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-1724227/.minikube/files for local assets ...
	I0127 11:24:43.728542 1732084 start.go:296] duration metric: took 92.238576ms for postStartSetup
	I0127 11:24:43.728593 1732084 main.go:141] libmachine: (addons-010792) Calling .GetConfigRaw
	I0127 11:24:43.729183 1732084 main.go:141] libmachine: (addons-010792) Calling .GetIP
	I0127 11:24:43.731643 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:43.731949 1732084 main.go:141] libmachine: (addons-010792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:24:d7", ip: ""} in network mk-addons-010792: {Iface:virbr1 ExpiryTime:2025-01-27 12:24:36 +0000 UTC Type:0 Mac:52:54:00:96:24:d7 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-010792 Clientid:01:52:54:00:96:24:d7}
	I0127 11:24:43.731970 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined IP address 192.168.39.45 and MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:43.732221 1732084 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/config.json ...
	I0127 11:24:43.732448 1732084 start.go:128] duration metric: took 21.492910928s to createHost
	I0127 11:24:43.732481 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHHostname
	I0127 11:24:43.734609 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:43.734885 1732084 main.go:141] libmachine: (addons-010792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:24:d7", ip: ""} in network mk-addons-010792: {Iface:virbr1 ExpiryTime:2025-01-27 12:24:36 +0000 UTC Type:0 Mac:52:54:00:96:24:d7 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-010792 Clientid:01:52:54:00:96:24:d7}
	I0127 11:24:43.734926 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined IP address 192.168.39.45 and MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:43.735061 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHPort
	I0127 11:24:43.735252 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHKeyPath
	I0127 11:24:43.735386 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHKeyPath
	I0127 11:24:43.735502 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHUsername
	I0127 11:24:43.735637 1732084 main.go:141] libmachine: Using SSH client type: native
	I0127 11:24:43.735798 1732084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I0127 11:24:43.735814 1732084 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 11:24:43.846606 1732084 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737977083.819483167
	
	I0127 11:24:43.846632 1732084 fix.go:216] guest clock: 1737977083.819483167
	I0127 11:24:43.846638 1732084 fix.go:229] Guest: 2025-01-27 11:24:43.819483167 +0000 UTC Remote: 2025-01-27 11:24:43.732465194 +0000 UTC m=+21.593734765 (delta=87.017973ms)
	I0127 11:24:43.846683 1732084 fix.go:200] guest clock delta is within tolerance: 87.017973ms
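
The guest-clock check runs "date +%s.%N" on the VM, parses the result, and compares it against the host clock; here the roughly 87ms delta is accepted. A sketch of that comparison, with a 2-second tolerance chosen only for illustration (the log does not state the actual threshold):

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "time"
    )

    // clockDelta parses the guest's "date +%s.%N" output and returns
    // guest time minus host time.
    func clockDelta(guestDateOutput string, hostNow time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(guestDateOutput, 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return guest.Sub(hostNow), nil
    }

    func main() {
        // Host timestamp taken from the "Remote:" value in the log above.
        host := time.Unix(0, int64(1737977083.732465194*float64(time.Second)))
        delta, err := clockDelta("1737977083.819483167", host)
        if err != nil {
            panic(err)
        }
        ok := math.Abs(float64(delta)) <= float64(2*time.Second) // assumed tolerance
        fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, ok)
    }
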
	I0127 11:24:43.846691 1732084 start.go:83] releasing machines lock for "addons-010792", held for 21.607226713s
	I0127 11:24:43.846720 1732084 main.go:141] libmachine: (addons-010792) Calling .DriverName
	I0127 11:24:43.847012 1732084 main.go:141] libmachine: (addons-010792) Calling .GetIP
	I0127 11:24:43.849508 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:43.849840 1732084 main.go:141] libmachine: (addons-010792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:24:d7", ip: ""} in network mk-addons-010792: {Iface:virbr1 ExpiryTime:2025-01-27 12:24:36 +0000 UTC Type:0 Mac:52:54:00:96:24:d7 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-010792 Clientid:01:52:54:00:96:24:d7}
	I0127 11:24:43.849874 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined IP address 192.168.39.45 and MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:43.849953 1732084 main.go:141] libmachine: (addons-010792) Calling .DriverName
	I0127 11:24:43.850348 1732084 main.go:141] libmachine: (addons-010792) Calling .DriverName
	I0127 11:24:43.850499 1732084 main.go:141] libmachine: (addons-010792) Calling .DriverName
	I0127 11:24:43.850603 1732084 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 11:24:43.850660 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHHostname
	I0127 11:24:43.850722 1732084 ssh_runner.go:195] Run: cat /version.json
	I0127 11:24:43.850764 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHHostname
	I0127 11:24:43.853220 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:43.853530 1732084 main.go:141] libmachine: (addons-010792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:24:d7", ip: ""} in network mk-addons-010792: {Iface:virbr1 ExpiryTime:2025-01-27 12:24:36 +0000 UTC Type:0 Mac:52:54:00:96:24:d7 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-010792 Clientid:01:52:54:00:96:24:d7}
	I0127 11:24:43.853558 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined IP address 192.168.39.45 and MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:43.853577 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:43.853681 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHPort
	I0127 11:24:43.853855 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHKeyPath
	I0127 11:24:43.853974 1732084 main.go:141] libmachine: (addons-010792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:24:d7", ip: ""} in network mk-addons-010792: {Iface:virbr1 ExpiryTime:2025-01-27 12:24:36 +0000 UTC Type:0 Mac:52:54:00:96:24:d7 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-010792 Clientid:01:52:54:00:96:24:d7}
	I0127 11:24:43.853997 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined IP address 192.168.39.45 and MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:43.854005 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHUsername
	I0127 11:24:43.854143 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHPort
	I0127 11:24:43.854158 1732084 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/addons-010792/id_rsa Username:docker}
	I0127 11:24:43.854272 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHKeyPath
	I0127 11:24:43.854396 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHUsername
	I0127 11:24:43.854569 1732084 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/addons-010792/id_rsa Username:docker}
	I0127 11:24:43.935185 1732084 ssh_runner.go:195] Run: systemctl --version
	I0127 11:24:43.964365 1732084 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 11:24:44.118211 1732084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 11:24:44.123784 1732084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 11:24:44.123844 1732084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 11:24:44.138158 1732084 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 11:24:44.138181 1732084 start.go:495] detecting cgroup driver to use...
	I0127 11:24:44.138236 1732084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 11:24:44.153592 1732084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 11:24:44.165489 1732084 docker.go:217] disabling cri-docker service (if available) ...
	I0127 11:24:44.165535 1732084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 11:24:44.177561 1732084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 11:24:44.189396 1732084 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 11:24:44.293202 1732084 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 11:24:44.422048 1732084 docker.go:233] disabling docker service ...
	I0127 11:24:44.422140 1732084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 11:24:44.435245 1732084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 11:24:44.446452 1732084 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 11:24:44.576222 1732084 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 11:24:44.692515 1732084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 11:24:44.705584 1732084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 11:24:44.722314 1732084 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 11:24:44.722383 1732084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:24:44.732124 1732084 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 11:24:44.732197 1732084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:24:44.741799 1732084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:24:44.751320 1732084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:24:44.760460 1732084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 11:24:44.769985 1732084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:24:44.779184 1732084 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:24:44.794576 1732084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
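
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf line by line: pause_image is forced to registry.k8s.io/pause:3.10, cgroup_manager to cgroupfs, conmon_cgroup to pod, and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls. A sketch of the same key = "value" substitution expressed in Go rather than sed (setConfLine and the starting pause:3.9 value are illustrative assumptions, not what is on the VM):

    package main

    import (
        "fmt"
        "regexp"
    )

    // setConfLine replaces an existing `key = ...` line, or appends one if the
    // key is not present, mirroring the sed one-liners run over SSH.
    func setConfLine(conf, key, value string) string {
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        replacement := fmt.Sprintf(`%s = %q`, key, value)
        if re.MatchString(conf) {
            return re.ReplaceAllString(conf, replacement)
        }
        return conf + "\n" + replacement
    }

    func main() {
        conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
        conf = setConfLine(conf, "pause_image", "registry.k8s.io/pause:3.10")
        conf = setConfLine(conf, "cgroup_manager", "cgroupfs")
        fmt.Print(conf)
    }
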
	I0127 11:24:44.804053 1732084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 11:24:44.812408 1732084 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 11:24:44.812447 1732084 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 11:24:44.824606 1732084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 11:24:44.833554 1732084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:24:44.942101 1732084 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 11:24:45.022946 1732084 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 11:24:45.023050 1732084 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 11:24:45.027199 1732084 start.go:563] Will wait 60s for crictl version
	I0127 11:24:45.027262 1732084 ssh_runner.go:195] Run: which crictl
	I0127 11:24:45.030543 1732084 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 11:24:45.065728 1732084 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 11:24:45.065818 1732084 ssh_runner.go:195] Run: crio --version
	I0127 11:24:45.090648 1732084 ssh_runner.go:195] Run: crio --version
	I0127 11:24:45.115967 1732084 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 11:24:45.117120 1732084 main.go:141] libmachine: (addons-010792) Calling .GetIP
	I0127 11:24:45.119470 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:45.119786 1732084 main.go:141] libmachine: (addons-010792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:24:d7", ip: ""} in network mk-addons-010792: {Iface:virbr1 ExpiryTime:2025-01-27 12:24:36 +0000 UTC Type:0 Mac:52:54:00:96:24:d7 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-010792 Clientid:01:52:54:00:96:24:d7}
	I0127 11:24:45.119811 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined IP address 192.168.39.45 and MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:24:45.119980 1732084 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0127 11:24:45.123396 1732084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 11:24:45.134442 1732084 kubeadm.go:883] updating cluster {Name:addons-010792 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-010792 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.45 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0127 11:24:45.134562 1732084 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 11:24:45.134624 1732084 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:24:45.162846 1732084 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 11:24:45.162908 1732084 ssh_runner.go:195] Run: which lz4
	I0127 11:24:45.166139 1732084 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 11:24:45.169570 1732084 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 11:24:45.169594 1732084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0127 11:24:46.306728 1732084 crio.go:462] duration metric: took 1.140616139s to copy over tarball
	I0127 11:24:46.306825 1732084 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 11:24:48.304146 1732084 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.997290181s)
	I0127 11:24:48.304187 1732084 crio.go:469] duration metric: took 1.997417047s to extract the tarball
	I0127 11:24:48.304197 1732084 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 11:24:48.340293 1732084 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:24:48.381599 1732084 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 11:24:48.381627 1732084 cache_images.go:84] Images are preloaded, skipping loading
	I0127 11:24:48.381635 1732084 kubeadm.go:934] updating node { 192.168.39.45 8443 v1.32.1 crio true true} ...
	I0127 11:24:48.381751 1732084 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-010792 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.45
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:addons-010792 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 11:24:48.381817 1732084 ssh_runner.go:195] Run: crio config
	I0127 11:24:48.423451 1732084 cni.go:84] Creating CNI manager for ""
	I0127 11:24:48.423475 1732084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:24:48.423486 1732084 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 11:24:48.423512 1732084 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.45 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-010792 NodeName:addons-010792 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.45"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.45 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 11:24:48.423673 1732084 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.45
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-010792"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.45"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.45"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 11:24:48.423747 1732084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 11:24:48.432753 1732084 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 11:24:48.432822 1732084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 11:24:48.441279 1732084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0127 11:24:48.455775 1732084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 11:24:48.469604 1732084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I0127 11:24:48.484225 1732084 ssh_runner.go:195] Run: grep 192.168.39.45	control-plane.minikube.internal$ /etc/hosts
	I0127 11:24:48.487406 1732084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.45	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 11:24:48.497779 1732084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:24:48.594981 1732084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:24:48.609687 1732084 certs.go:68] Setting up /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792 for IP: 192.168.39.45
	I0127 11:24:48.609710 1732084 certs.go:194] generating shared ca certs ...
	I0127 11:24:48.609728 1732084 certs.go:226] acquiring lock for ca certs: {Name:mk9e5c59e9dbe250ccc1601895509e2d4f3690e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:24:48.609886 1732084 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.key
	I0127 11:24:48.800093 1732084 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.crt ...
	I0127 11:24:48.800127 1732084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.crt: {Name:mkcf2ddb267e2d27430268338c919792cd43cf8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:24:48.800326 1732084 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.key ...
	I0127 11:24:48.800342 1732084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.key: {Name:mk8bfde3c6ff61dc8b745d015749ced92599fbd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:24:48.800450 1732084 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.key
	I0127 11:24:48.935572 1732084 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.crt ...
	I0127 11:24:48.935604 1732084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.crt: {Name:mk6d492e7eb4a87cb0ef896fe11d84a290e97313 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:24:48.935794 1732084 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.key ...
	I0127 11:24:48.935811 1732084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.key: {Name:mkb0d52aa4785b18b5df0334585f3d452e9bbf70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:24:48.935918 1732084 certs.go:256] generating profile certs ...
	I0127 11:24:48.935982 1732084 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.key
	I0127 11:24:48.935998 1732084 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.crt with IP's: []
	I0127 11:24:49.072992 1732084 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.crt ...
	I0127 11:24:49.073027 1732084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.crt: {Name:mk35943e5b5d832704d98fdce5f1b88dd542e1b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:24:49.073217 1732084 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.key ...
	I0127 11:24:49.073233 1732084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.key: {Name:mka84cb9df9099d50954444be9fc4cfc56bee041 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:24:49.073335 1732084 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/apiserver.key.d5892b90
	I0127 11:24:49.073355 1732084 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/apiserver.crt.d5892b90 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.45]
	I0127 11:24:49.407495 1732084 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/apiserver.crt.d5892b90 ...
	I0127 11:24:49.407534 1732084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/apiserver.crt.d5892b90: {Name:mk19551a53b7ad5db86f9260c5d3c7e229d28463 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:24:49.407697 1732084 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/apiserver.key.d5892b90 ...
	I0127 11:24:49.407711 1732084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/apiserver.key.d5892b90: {Name:mkceb19a0f65a5db797b80c120ee55efb4bb7955 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:24:49.407781 1732084 certs.go:381] copying /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/apiserver.crt.d5892b90 -> /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/apiserver.crt
	I0127 11:24:49.407882 1732084 certs.go:385] copying /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/apiserver.key.d5892b90 -> /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/apiserver.key
	I0127 11:24:49.407939 1732084 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/proxy-client.key
	I0127 11:24:49.407959 1732084 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/proxy-client.crt with IP's: []
	I0127 11:24:49.576617 1732084 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/proxy-client.crt ...
	I0127 11:24:49.576652 1732084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/proxy-client.crt: {Name:mk1d2233ca89a0069961dc29546477eb067666ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:24:49.576834 1732084 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/proxy-client.key ...
	I0127 11:24:49.576848 1732084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/proxy-client.key: {Name:mk4e0937987a5090b70be4b1b958349628e83c97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:24:49.577021 1732084 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 11:24:49.577062 1732084 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem (1078 bytes)
	I0127 11:24:49.577087 1732084 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem (1123 bytes)
	I0127 11:24:49.577110 1732084 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/key.pem (1675 bytes)
	I0127 11:24:49.577673 1732084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 11:24:49.600710 1732084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 11:24:49.621964 1732084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 11:24:49.643105 1732084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 11:24:49.663288 1732084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0127 11:24:49.683341 1732084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 11:24:49.704833 1732084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 11:24:49.726831 1732084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 11:24:49.748831 1732084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 11:24:49.770796 1732084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 11:24:49.786585 1732084 ssh_runner.go:195] Run: openssl version
	I0127 11:24:49.791815 1732084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 11:24:49.802399 1732084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:24:49.806436 1732084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 11:24 /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:24:49.806478 1732084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:24:49.811603 1732084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
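
The two steps above install minikubeCA.pem into the system trust store: "openssl x509 -hash -noout" prints the certificate's subject hash (b5213941 here), and a "<hash>.0" symlink pointing at the PEM is created in /etc/ssl/certs, the layout OpenSSL's hashed-directory lookup expects. A sketch of the same steps with os/exec (run as root, as the test does via sudo; this is not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCACert computes the subject hash of certPath via openssl and creates
    // the <hash>.0 symlink in certsDir, returning the link path.
    func linkCACert(certPath, certsDir string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return "", err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
        link := filepath.Join(certsDir, hash+".0")
        // Replace any stale link, then point <hash>.0 at the certificate.
        _ = os.Remove(link)
        return link, os.Symlink(certPath, link)
    }

    func main() {
        link, err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
        fmt.Println(link, err)
    }
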
	I0127 11:24:49.822164 1732084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 11:24:49.825868 1732084 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 11:24:49.825926 1732084 kubeadm.go:392] StartCluster: {Name:addons-010792 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-010792 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.45 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:24:49.826021 1732084 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 11:24:49.826067 1732084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 11:24:49.864416 1732084 cri.go:89] found id: ""
	I0127 11:24:49.864475 1732084 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 11:24:49.874449 1732084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 11:24:49.883949 1732084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:24:49.893623 1732084 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:24:49.893640 1732084 kubeadm.go:157] found existing configuration files:
	
	I0127 11:24:49.893677 1732084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 11:24:49.902708 1732084 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:24:49.902759 1732084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:24:49.912053 1732084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 11:24:49.921495 1732084 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:24:49.921537 1732084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:24:49.931104 1732084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 11:24:49.940392 1732084 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:24:49.940451 1732084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:24:49.949704 1732084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 11:24:49.957633 1732084 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:24:49.957692 1732084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 11:24:49.965806 1732084 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 11:24:50.117103 1732084 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 11:24:59.745581 1732084 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 11:24:59.745680 1732084 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 11:24:59.745770 1732084 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 11:24:59.745896 1732084 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 11:24:59.746029 1732084 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 11:24:59.746129 1732084 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 11:24:59.747397 1732084 out.go:235]   - Generating certificates and keys ...
	I0127 11:24:59.747489 1732084 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 11:24:59.747580 1732084 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 11:24:59.747667 1732084 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 11:24:59.747721 1732084 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 11:24:59.747801 1732084 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 11:24:59.747884 1732084 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 11:24:59.747964 1732084 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 11:24:59.748102 1732084 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-010792 localhost] and IPs [192.168.39.45 127.0.0.1 ::1]
	I0127 11:24:59.748189 1732084 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 11:24:59.748393 1732084 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-010792 localhost] and IPs [192.168.39.45 127.0.0.1 ::1]
	I0127 11:24:59.748485 1732084 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 11:24:59.748558 1732084 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 11:24:59.748598 1732084 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 11:24:59.748646 1732084 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 11:24:59.748693 1732084 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 11:24:59.748750 1732084 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 11:24:59.748823 1732084 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 11:24:59.748875 1732084 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 11:24:59.748919 1732084 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 11:24:59.748999 1732084 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 11:24:59.749095 1732084 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 11:24:59.750193 1732084 out.go:235]   - Booting up control plane ...
	I0127 11:24:59.750289 1732084 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 11:24:59.750378 1732084 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 11:24:59.750487 1732084 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 11:24:59.750636 1732084 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 11:24:59.750789 1732084 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 11:24:59.750831 1732084 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 11:24:59.750938 1732084 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 11:24:59.751032 1732084 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 11:24:59.751086 1732084 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.341883ms
	I0127 11:24:59.751150 1732084 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 11:24:59.751202 1732084 kubeadm.go:310] [api-check] The API server is healthy after 4.502663308s
	I0127 11:24:59.751291 1732084 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 11:24:59.751397 1732084 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 11:24:59.751446 1732084 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 11:24:59.751592 1732084 kubeadm.go:310] [mark-control-plane] Marking the node addons-010792 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 11:24:59.751639 1732084 kubeadm.go:310] [bootstrap-token] Using token: 2kkkjr.srvfxz1mqz0tqefu
	I0127 11:24:59.752962 1732084 out.go:235]   - Configuring RBAC rules ...
	I0127 11:24:59.753056 1732084 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 11:24:59.753134 1732084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 11:24:59.753288 1732084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 11:24:59.753447 1732084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 11:24:59.753606 1732084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 11:24:59.753717 1732084 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 11:24:59.753889 1732084 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 11:24:59.753933 1732084 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 11:24:59.753972 1732084 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 11:24:59.753981 1732084 kubeadm.go:310] 
	I0127 11:24:59.754030 1732084 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 11:24:59.754036 1732084 kubeadm.go:310] 
	I0127 11:24:59.754131 1732084 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 11:24:59.754141 1732084 kubeadm.go:310] 
	I0127 11:24:59.754175 1732084 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 11:24:59.754260 1732084 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 11:24:59.754304 1732084 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 11:24:59.754310 1732084 kubeadm.go:310] 
	I0127 11:24:59.754382 1732084 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 11:24:59.754391 1732084 kubeadm.go:310] 
	I0127 11:24:59.754467 1732084 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 11:24:59.754477 1732084 kubeadm.go:310] 
	I0127 11:24:59.754550 1732084 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 11:24:59.754647 1732084 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 11:24:59.754734 1732084 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 11:24:59.754754 1732084 kubeadm.go:310] 
	I0127 11:24:59.754829 1732084 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 11:24:59.754937 1732084 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 11:24:59.754950 1732084 kubeadm.go:310] 
	I0127 11:24:59.755061 1732084 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2kkkjr.srvfxz1mqz0tqefu \
	I0127 11:24:59.755226 1732084 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b321ef7b3ab14f6d2ff43d6919f4b8b8c82a4707be0e2d90af4f9d1ba84cbc1f \
	I0127 11:24:59.755251 1732084 kubeadm.go:310] 	--control-plane 
	I0127 11:24:59.755259 1732084 kubeadm.go:310] 
	I0127 11:24:59.755379 1732084 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 11:24:59.755390 1732084 kubeadm.go:310] 
	I0127 11:24:59.755502 1732084 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2kkkjr.srvfxz1mqz0tqefu \
	I0127 11:24:59.755661 1732084 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b321ef7b3ab14f6d2ff43d6919f4b8b8c82a4707be0e2d90af4f9d1ba84cbc1f 
	I0127 11:24:59.755679 1732084 cni.go:84] Creating CNI manager for ""
	I0127 11:24:59.755688 1732084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:24:59.757042 1732084 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 11:24:59.758003 1732084 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 11:24:59.767584 1732084 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 11:24:59.784552 1732084 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 11:24:59.784642 1732084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:24:59.784686 1732084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-010792 minikube.k8s.io/updated_at=2025_01_27T11_24_59_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650 minikube.k8s.io/name=addons-010792 minikube.k8s.io/primary=true
	I0127 11:24:59.817054 1732084 ops.go:34] apiserver oom_adj: -16
	I0127 11:24:59.929814 1732084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:25:00.430442 1732084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:25:00.930388 1732084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:25:01.429975 1732084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:25:01.930409 1732084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:25:02.430608 1732084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:25:02.930481 1732084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:25:03.430622 1732084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:25:03.930455 1732084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:25:04.009838 1732084 kubeadm.go:1113] duration metric: took 4.225250666s to wait for elevateKubeSystemPrivileges
	I0127 11:25:04.009889 1732084 kubeadm.go:394] duration metric: took 14.183969839s to StartCluster
	I0127 11:25:04.009912 1732084 settings.go:142] acquiring lock: {Name:mk5612abdbdf8001cdf3481ad7c8001a04d496dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:25:04.010076 1732084 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20318-1724227/kubeconfig
	I0127 11:25:04.010771 1732084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/kubeconfig: {Name:mk9a903cebca75da3c74ff68f708e0a4e05f999b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:25:04.011036 1732084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 11:25:04.011080 1732084 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.45 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 11:25:04.011143 1732084 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0127 11:25:04.011297 1732084 addons.go:69] Setting yakd=true in profile "addons-010792"
	I0127 11:25:04.011308 1732084 addons.go:69] Setting inspektor-gadget=true in profile "addons-010792"
	I0127 11:25:04.011328 1732084 addons.go:238] Setting addon yakd=true in "addons-010792"
	I0127 11:25:04.011334 1732084 addons.go:238] Setting addon inspektor-gadget=true in "addons-010792"
	I0127 11:25:04.011349 1732084 config.go:182] Loaded profile config "addons-010792": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:25:04.011364 1732084 addons.go:69] Setting cloud-spanner=true in profile "addons-010792"
	I0127 11:25:04.011372 1732084 host.go:66] Checking if "addons-010792" exists ...
	I0127 11:25:04.011375 1732084 addons.go:69] Setting ingress-dns=true in profile "addons-010792"
	I0127 11:25:04.011388 1732084 addons.go:238] Setting addon cloud-spanner=true in "addons-010792"
	I0127 11:25:04.011395 1732084 addons.go:238] Setting addon ingress-dns=true in "addons-010792"
	I0127 11:25:04.011338 1732084 addons.go:69] Setting default-storageclass=true in profile "addons-010792"
	I0127 11:25:04.011403 1732084 addons.go:69] Setting gcp-auth=true in profile "addons-010792"
	I0127 11:25:04.011415 1732084 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-010792"
	I0127 11:25:04.011410 1732084 addons.go:69] Setting storage-provisioner=true in profile "addons-010792"
	I0127 11:25:04.011424 1732084 host.go:66] Checking if "addons-010792" exists ...
	I0127 11:25:04.011425 1732084 mustload.go:65] Loading cluster: addons-010792
	I0127 11:25:04.011368 1732084 host.go:66] Checking if "addons-010792" exists ...
	I0127 11:25:04.011449 1732084 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-010792"
	I0127 11:25:04.011465 1732084 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-010792"
	I0127 11:25:04.011508 1732084 host.go:66] Checking if "addons-010792" exists ...
	I0127 11:25:04.011601 1732084 config.go:182] Loaded profile config "addons-010792": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:25:04.011758 1732084 addons.go:69] Setting volcano=true in profile "addons-010792"
	I0127 11:25:04.011799 1732084 addons.go:238] Setting addon volcano=true in "addons-010792"
	I0127 11:25:04.011837 1732084 addons.go:69] Setting registry=true in profile "addons-010792"
	I0127 11:25:04.011359 1732084 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-010792"
	I0127 11:25:04.011865 1732084 addons.go:69] Setting volumesnapshots=true in profile "addons-010792"
	I0127 11:25:04.011880 1732084 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-010792"
	I0127 11:25:04.011879 1732084 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-010792"
	I0127 11:25:04.011890 1732084 addons.go:238] Setting addon volumesnapshots=true in "addons-010792"
	I0127 11:25:04.011896 1732084 host.go:66] Checking if "addons-010792" exists ...
	I0127 11:25:04.011898 1732084 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-010792"
	I0127 11:25:04.011916 1732084 host.go:66] Checking if "addons-010792" exists ...
	I0127 11:25:04.011924 1732084 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-010792"
	I0127 11:25:04.011948 1732084 host.go:66] Checking if "addons-010792" exists ...
	I0127 11:25:04.011427 1732084 host.go:66] Checking if "addons-010792" exists ...
	I0127 11:25:04.011432 1732084 addons.go:238] Setting addon storage-provisioner=true in "addons-010792"
	I0127 11:25:04.011955 1732084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:25:04.011981 1732084 host.go:66] Checking if "addons-010792" exists ...
	I0127 11:25:04.011985 1732084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:25:04.011849 1732084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:25:04.012280 1732084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:25:04.011867 1732084 addons.go:238] Setting addon registry=true in "addons-010792"
	I0127 11:25:04.012291 1732084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:25:04.012300 1732084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:25:04.012307 1732084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:25:04.012313 1732084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:25:04.012315 1732084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:25:04.012325 1732084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:25:04.012342 1732084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:25:04.012351 1732084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:25:04.012360 1732084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:25:04.012360 1732084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:25:04.011439 1732084 addons.go:69] Setting metrics-server=true in profile "addons-010792"
	I0127 11:25:04.012399 1732084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:25:04.012416 1732084 addons.go:238] Setting addon metrics-server=true in "addons-010792"
	I0127 11:25:04.011387 1732084 addons.go:69] Setting ingress=true in profile "addons-010792"
	I0127 11:25:04.011850 1732084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:25:04.011947 1732084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:25:04.011872 1732084 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-010792"
	I0127 11:25:04.012434 1732084 addons.go:238] Setting addon ingress=true in "addons-010792"
	I0127 11:25:04.011868 1732084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:25:04.012459 1732084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:25:04.012472 1732084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:25:04.012501 1732084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:25:04.012634 1732084 host.go:66] Checking if "addons-010792" exists ...
	I0127 11:25:04.012907 1732084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:25:04.012314 1732084 host.go:66] Checking if "addons-010792" exists ...
	I0127 11:25:04.012998 1732084 host.go:66] Checking if "addons-010792" exists ...
	I0127 11:25:04.011874 1732084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:25:04.013073 1732084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:25:04.013238 1732084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:25:04.013269 1732084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:25:04.013704 1732084 host.go:66] Checking if "addons-010792" exists ...
	I0127 11:25:04.014241 1732084 out.go:177] * Verifying Kubernetes components...
	I0127 11:25:04.015882 1732084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:25:04.032359 1732084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39533
	I0127 11:25:04.039425 1732084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:25:04.039471 1732084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:25:04.039614 1732084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:25:04.039657 1732084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:25:04.039897 1732084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37421
	I0127 11:25:04.040006 1732084 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:25:04.040050 1732084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44755
	I0127 11:25:04.039427 1732084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:25:04.040184 1732084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:25:04.050564 1732084 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:25:04.050817 1732084 main.go:141] libmachine: Using API Version  1
	I0127 11:25:04.050835 1732084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:25:04.050917 1732084 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:25:04.051274 1732084 main.go:141] libmachine: Using API Version  1
	I0127 11:25:04.051294 1732084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:25:04.051363 1732084 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:25:04.051979 1732084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:25:04.052025 1732084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:25:04.052360 1732084 main.go:141] libmachine: Using API Version  1
	I0127 11:25:04.052376 1732084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:25:04.052434 1732084 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:25:04.052829 1732084 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:25:04.053272 1732084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:25:04.053307 1732084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:25:04.053933 1732084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:25:04.053972 1732084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:25:04.077273 1732084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38655
	I0127 11:25:04.078010 1732084 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:25:04.078530 1732084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39805
	I0127 11:25:04.078780 1732084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43063
	I0127 11:25:04.079451 1732084 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:25:04.079523 1732084 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:25:04.079615 1732084 main.go:141] libmachine: Using API Version  1
	I0127 11:25:04.079636 1732084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:25:04.080060 1732084 main.go:141] libmachine: Using API Version  1
	I0127 11:25:04.080088 1732084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:25:04.080156 1732084 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:25:04.080473 1732084 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:25:04.080826 1732084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37139
	I0127 11:25:04.080905 1732084 main.go:141] libmachine: Using API Version  1
	I0127 11:25:04.080920 1732084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:25:04.080945 1732084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:25:04.080982 1732084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:25:04.081212 1732084 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:25:04.081313 1732084 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:25:04.081677 1732084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43811
	I0127 11:25:04.082010 1732084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:25:04.082044 1732084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:25:04.082171 1732084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:25:04.082202 1732084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:25:04.082261 1732084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46473
	I0127 11:25:04.082712 1732084 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:25:04.082781 1732084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37599
	I0127 11:25:04.082716 1732084 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:25:04.082901 1732084 main.go:141] libmachine: Using API Version  1
	I0127 11:25:04.082925 1732084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:25:04.083314 1732084 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:25:04.083358 1732084 main.go:141] libmachine: Using API Version  1
	I0127 11:25:04.083374 1732084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:25:04.083443 1732084 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:25:04.083477 1732084 main.go:141] libmachine: Using API Version  1
	I0127 11:25:04.083497 1732084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:25:04.083831 1732084 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:25:04.084042 1732084 main.go:141] libmachine: (addons-010792) Calling .GetState
	I0127 11:25:04.084059 1732084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:25:04.084097 1732084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:25:04.084757 1732084 main.go:141] libmachine: Using API Version  1
	I0127 11:25:04.084781 1732084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:25:04.085137 1732084 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:25:04.085662 1732084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:25:04.085686 1732084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:25:04.086006 1732084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36119
	I0127 11:25:04.086164 1732084 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:25:04.086397 1732084 main.go:141] libmachine: (addons-010792) Calling .GetState
	I0127 11:25:04.086727 1732084 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:25:04.087234 1732084 main.go:141] libmachine: Using API Version  1
	I0127 11:25:04.087251 1732084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:25:04.087600 1732084 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:25:04.088178 1732084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:25:04.088201 1732084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:25:04.088350 1732084 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-010792"
	I0127 11:25:04.088399 1732084 host.go:66] Checking if "addons-010792" exists ...
	I0127 11:25:04.088419 1732084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36867
	I0127 11:25:04.088753 1732084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:25:04.088777 1732084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:25:04.088853 1732084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44841
	I0127 11:25:04.088942 1732084 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:25:04.089568 1732084 main.go:141] libmachine: Using API Version  1
	I0127 11:25:04.089587 1732084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:25:04.090032 1732084 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:25:04.090574 1732084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:25:04.090610 1732084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:25:04.091095 1732084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44063
	I0127 11:25:04.103039 1732084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37637
	I0127 11:25:04.103588 1732084 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:25:04.104310 1732084 main.go:141] libmachine: Using API Version  1
	I0127 11:25:04.104331 1732084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:25:04.105250 1732084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34173
	I0127 11:25:04.105900 1732084 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:25:04.106537 1732084 main.go:141] libmachine: Using API Version  1
	I0127 11:25:04.106554 1732084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:25:04.107018 1732084 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:25:04.107242 1732084 main.go:141] libmachine: (addons-010792) Calling .GetState
	I0127 11:25:04.108569 1732084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34661
	I0127 11:25:04.109135 1732084 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:25:04.109247 1732084 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:25:04.109280 1732084 host.go:66] Checking if "addons-010792" exists ...
	I0127 11:25:04.109639 1732084 main.go:141] libmachine: (addons-010792) Calling .GetState
	I0127 11:25:04.109721 1732084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:25:04.109755 1732084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:25:04.110339 1732084 main.go:141] libmachine: Using API Version  1
	I0127 11:25:04.110360 1732084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:25:04.110788 1732084 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:25:04.111056 1732084 main.go:141] libmachine: (addons-010792) Calling .GetState
	I0127 11:25:04.111319 1732084 main.go:141] libmachine: (addons-010792) Calling .DriverName
	I0127 11:25:04.112566 1732084 main.go:141] libmachine: (addons-010792) Calling .DriverName
	I0127 11:25:04.114727 1732084 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.36.0
	I0127 11:25:04.114963 1732084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36863
	I0127 11:25:04.114727 1732084 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0127 11:25:04.115162 1732084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35221
	I0127 11:25:04.115576 1732084 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:25:04.115701 1732084 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:25:04.116105 1732084 main.go:141] libmachine: Using API Version  1
	I0127 11:25:04.116127 1732084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:25:04.116258 1732084 main.go:141] libmachine: Using API Version  1
	I0127 11:25:04.116269 1732084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:25:04.116385 1732084 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0127 11:25:04.116404 1732084 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0127 11:25:04.116427 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHHostname
	I0127 11:25:04.116912 1732084 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:25:04.116994 1732084 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:25:04.117510 1732084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:25:04.117553 1732084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:25:04.117700 1732084 main.go:141] libmachine: (addons-010792) Calling .GetState
	I0127 11:25:04.117942 1732084 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0127 11:25:04.118596 1732084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41565
	I0127 11:25:04.119989 1732084 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:25:04.120097 1732084 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:25:04.120105 1732084 addons.go:238] Setting addon default-storageclass=true in "addons-010792"
	I0127 11:25:04.120146 1732084 host.go:66] Checking if "addons-010792" exists ...
	I0127 11:25:04.120464 1732084 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0127 11:25:04.120520 1732084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:25:04.120555 1732084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:25:04.120649 1732084 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:25:04.120749 1732084 main.go:141] libmachine: (addons-010792) Calling .DriverName
	I0127 11:25:04.120803 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:25:04.120863 1732084 main.go:141] libmachine: Using API Version  1
	I0127 11:25:04.120883 1732084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:25:04.121470 1732084 main.go:141] libmachine: (addons-010792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:24:d7", ip: ""} in network mk-addons-010792: {Iface:virbr1 ExpiryTime:2025-01-27 12:24:36 +0000 UTC Type:0 Mac:52:54:00:96:24:d7 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-010792 Clientid:01:52:54:00:96:24:d7}
	I0127 11:25:04.121491 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined IP address 192.168.39.45 and MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:25:04.121541 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHPort
	I0127 11:25:04.121602 1732084 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:25:04.121995 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHKeyPath
	I0127 11:25:04.122354 1732084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:25:04.122399 1732084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:25:04.122564 1732084 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0127 11:25:04.123216 1732084 main.go:141] libmachine: Using API Version  1
	I0127 11:25:04.123238 1732084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:25:04.122700 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHUsername
	I0127 11:25:04.122804 1732084 main.go:141] libmachine: Using API Version  1
	I0127 11:25:04.123327 1732084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:25:04.123486 1732084 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0127 11:25:04.123871 1732084 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:25:04.123929 1732084 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:25:04.123974 1732084 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/addons-010792/id_rsa Username:docker}
	I0127 11:25:04.124260 1732084 main.go:141] libmachine: (addons-010792) Calling .GetState
	I0127 11:25:04.124287 1732084 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0127 11:25:04.124304 1732084 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0127 11:25:04.124321 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHHostname
	I0127 11:25:04.125403 1732084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:25:04.125432 1732084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:25:04.126103 1732084 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0127 11:25:04.127309 1732084 main.go:141] libmachine: (addons-010792) Calling .DriverName
	I0127 11:25:04.127346 1732084 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0127 11:25:04.128520 1732084 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0127 11:25:04.128599 1732084 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0127 11:25:04.128947 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:25:04.129377 1732084 main.go:141] libmachine: (addons-010792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:24:d7", ip: ""} in network mk-addons-010792: {Iface:virbr1 ExpiryTime:2025-01-27 12:24:36 +0000 UTC Type:0 Mac:52:54:00:96:24:d7 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-010792 Clientid:01:52:54:00:96:24:d7}
	I0127 11:25:04.129407 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined IP address 192.168.39.45 and MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:25:04.129671 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHPort
	I0127 11:25:04.129785 1732084 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0127 11:25:04.129809 1732084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0127 11:25:04.129834 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHHostname
	I0127 11:25:04.130012 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHKeyPath
	I0127 11:25:04.130251 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHUsername
	I0127 11:25:04.130367 1732084 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/addons-010792/id_rsa Username:docker}
	I0127 11:25:04.131609 1732084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35493
	I0127 11:25:04.131968 1732084 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0127 11:25:04.132091 1732084 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:25:04.132607 1732084 main.go:141] libmachine: Using API Version  1
	I0127 11:25:04.132634 1732084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:25:04.133003 1732084 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:25:04.133163 1732084 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0127 11:25:04.133185 1732084 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0127 11:25:04.133210 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHHostname
	I0127 11:25:04.133211 1732084 main.go:141] libmachine: (addons-010792) Calling .GetState
	I0127 11:25:04.133509 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:25:04.135912 1732084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42651
	I0127 11:25:04.136340 1732084 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:25:04.136843 1732084 main.go:141] libmachine: Using API Version  1
	I0127 11:25:04.136877 1732084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:25:04.137239 1732084 main.go:141] libmachine: (addons-010792) Calling .DriverName
	I0127 11:25:04.137317 1732084 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:25:04.137635 1732084 main.go:141] libmachine: (addons-010792) Calling .GetState
	I0127 11:25:04.138603 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:25:04.138869 1732084 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0127 11:25:04.139880 1732084 main.go:141] libmachine: (addons-010792) Calling .DriverName
	I0127 11:25:04.140009 1732084 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 11:25:04.140023 1732084 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 11:25:04.140041 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHHostname
	I0127 11:25:04.140816 1732084 main.go:141] libmachine: (addons-010792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:24:d7", ip: ""} in network mk-addons-010792: {Iface:virbr1 ExpiryTime:2025-01-27 12:24:36 +0000 UTC Type:0 Mac:52:54:00:96:24:d7 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-010792 Clientid:01:52:54:00:96:24:d7}
	I0127 11:25:04.140845 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined IP address 192.168.39.45 and MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:25:04.140868 1732084 main.go:141] libmachine: (addons-010792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:24:d7", ip: ""} in network mk-addons-010792: {Iface:virbr1 ExpiryTime:2025-01-27 12:24:36 +0000 UTC Type:0 Mac:52:54:00:96:24:d7 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-010792 Clientid:01:52:54:00:96:24:d7}
	I0127 11:25:04.140879 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined IP address 192.168.39.45 and MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:25:04.140943 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHPort
	I0127 11:25:04.141028 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHPort
	I0127 11:25:04.141092 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHKeyPath
	I0127 11:25:04.141261 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHKeyPath
	I0127 11:25:04.141334 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHUsername
	I0127 11:25:04.141493 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHUsername
	I0127 11:25:04.141555 1732084 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/addons-010792/id_rsa Username:docker}
	I0127 11:25:04.141968 1732084 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/addons-010792/id_rsa Username:docker}
	I0127 11:25:04.142252 1732084 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0127 11:25:04.143587 1732084 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0127 11:25:04.144545 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:25:04.144763 1732084 main.go:141] libmachine: (addons-010792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:24:d7", ip: ""} in network mk-addons-010792: {Iface:virbr1 ExpiryTime:2025-01-27 12:24:36 +0000 UTC Type:0 Mac:52:54:00:96:24:d7 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-010792 Clientid:01:52:54:00:96:24:d7}
	I0127 11:25:04.144785 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined IP address 192.168.39.45 and MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:25:04.144814 1732084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35589
	I0127 11:25:04.145049 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHPort
	I0127 11:25:04.145278 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHKeyPath
	I0127 11:25:04.145361 1732084 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:25:04.145604 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHUsername
	I0127 11:25:04.145798 1732084 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0127 11:25:04.145908 1732084 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/addons-010792/id_rsa Username:docker}
	I0127 11:25:04.146816 1732084 main.go:141] libmachine: Using API Version  1
	I0127 11:25:04.147096 1732084 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0127 11:25:04.147111 1732084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0127 11:25:04.147127 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHHostname
	I0127 11:25:04.147675 1732084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:25:04.148499 1732084 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:25:04.148791 1732084 main.go:141] libmachine: (addons-010792) Calling .GetState
	I0127 11:25:04.150816 1732084 main.go:141] libmachine: (addons-010792) Calling .DriverName
	I0127 11:25:04.151357 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:25:04.151808 1732084 main.go:141] libmachine: (addons-010792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:24:d7", ip: ""} in network mk-addons-010792: {Iface:virbr1 ExpiryTime:2025-01-27 12:24:36 +0000 UTC Type:0 Mac:52:54:00:96:24:d7 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-010792 Clientid:01:52:54:00:96:24:d7}
	I0127 11:25:04.151829 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined IP address 192.168.39.45 and MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:25:04.152121 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHPort
	I0127 11:25:04.152331 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHKeyPath
	I0127 11:25:04.152385 1732084 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0127 11:25:04.152537 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHUsername
	I0127 11:25:04.152661 1732084 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/addons-010792/id_rsa Username:docker}
	I0127 11:25:04.153303 1732084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36327
	I0127 11:25:04.153427 1732084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42163
	I0127 11:25:04.153609 1732084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43571
	I0127 11:25:04.153741 1732084 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:25:04.154023 1732084 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:25:04.154183 1732084 main.go:141] libmachine: Using API Version  1
	I0127 11:25:04.154195 1732084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:25:04.154253 1732084 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:25:04.154514 1732084 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:25:04.154828 1732084 main.go:141] libmachine: (addons-010792) Calling .GetState
	I0127 11:25:04.154932 1732084 main.go:141] libmachine: Using API Version  1
	I0127 11:25:04.154954 1732084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:25:04.155267 1732084 out.go:177]   - Using image docker.io/registry:2.8.3
	I0127 11:25:04.155380 1732084 main.go:141] libmachine: Using API Version  1
	I0127 11:25:04.155408 1732084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:25:04.155778 1732084 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:25:04.155975 1732084 main.go:141] libmachine: (addons-010792) Calling .DriverName
	I0127 11:25:04.156160 1732084 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:25:04.156492 1732084 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0127 11:25:04.156510 1732084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0127 11:25:04.156529 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHHostname
	I0127 11:25:04.156597 1732084 main.go:141] libmachine: (addons-010792) Calling .GetState
	I0127 11:25:04.157215 1732084 main.go:141] libmachine: (addons-010792) Calling .DriverName
	I0127 11:25:04.157673 1732084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43343
	I0127 11:25:04.158247 1732084 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:25:04.158833 1732084 main.go:141] libmachine: Using API Version  1
	I0127 11:25:04.158857 1732084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:25:04.159399 1732084 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:25:04.159538 1732084 main.go:141] libmachine: (addons-010792) Calling .GetState
	I0127 11:25:04.159690 1732084 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:25:04.160373 1732084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40853
	I0127 11:25:04.160592 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:25:04.160708 1732084 main.go:141] libmachine: (addons-010792) Calling .DriverName
	I0127 11:25:04.161121 1732084 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:25:04.161138 1732084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 11:25:04.161154 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHHostname
	I0127 11:25:04.161219 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHPort
	I0127 11:25:04.161276 1732084 main.go:141] libmachine: (addons-010792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:24:d7", ip: ""} in network mk-addons-010792: {Iface:virbr1 ExpiryTime:2025-01-27 12:24:36 +0000 UTC Type:0 Mac:52:54:00:96:24:d7 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-010792 Clientid:01:52:54:00:96:24:d7}
	I0127 11:25:04.161290 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined IP address 192.168.39.45 and MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:25:04.161374 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHKeyPath
	I0127 11:25:04.161540 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHUsername
	I0127 11:25:04.161665 1732084 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/addons-010792/id_rsa Username:docker}
	I0127 11:25:04.162669 1732084 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0127 11:25:04.163314 1732084 main.go:141] libmachine: (addons-010792) Calling .DriverName
	I0127 11:25:04.163616 1732084 main.go:141] libmachine: Making call to close driver server
	I0127 11:25:04.163628 1732084 main.go:141] libmachine: (addons-010792) Calling .Close
	I0127 11:25:04.163908 1732084 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0127 11:25:04.163927 1732084 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0127 11:25:04.163947 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHHostname
	I0127 11:25:04.164002 1732084 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:25:04.165721 1732084 main.go:141] libmachine: (addons-010792) DBG | Closing plugin on server side
	I0127 11:25:04.165764 1732084 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:25:04.165776 1732084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:25:04.165789 1732084 main.go:141] libmachine: Making call to close driver server
	I0127 11:25:04.165795 1732084 main.go:141] libmachine: (addons-010792) Calling .Close
	I0127 11:25:04.165797 1732084 main.go:141] libmachine: Using API Version  1
	I0127 11:25:04.165812 1732084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:25:04.165816 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:25:04.166134 1732084 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:25:04.166150 1732084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:25:04.166156 1732084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38257
	I0127 11:25:04.166178 1732084 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:25:04.166242 1732084 main.go:141] libmachine: (addons-010792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:24:d7", ip: ""} in network mk-addons-010792: {Iface:virbr1 ExpiryTime:2025-01-27 12:24:36 +0000 UTC Type:0 Mac:52:54:00:96:24:d7 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-010792 Clientid:01:52:54:00:96:24:d7}
	I0127 11:25:04.166262 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined IP address 192.168.39.45 and MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	W0127 11:25:04.166247 1732084 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0127 11:25:04.166890 1732084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:25:04.166917 1732084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:25:04.167145 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHPort
	I0127 11:25:04.167245 1732084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44917
	I0127 11:25:04.167982 1732084 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:25:04.168460 1732084 main.go:141] libmachine: Using API Version  1
	I0127 11:25:04.168474 1732084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:25:04.168925 1732084 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:25:04.168978 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:25:04.169234 1732084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42285
	I0127 11:25:04.169389 1732084 main.go:141] libmachine: (addons-010792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:24:d7", ip: ""} in network mk-addons-010792: {Iface:virbr1 ExpiryTime:2025-01-27 12:24:36 +0000 UTC Type:0 Mac:52:54:00:96:24:d7 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-010792 Clientid:01:52:54:00:96:24:d7}
	I0127 11:25:04.169406 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined IP address 192.168.39.45 and MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:25:04.169480 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHKeyPath
	I0127 11:25:04.169570 1732084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:25:04.169613 1732084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:25:04.169712 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHUsername
	I0127 11:25:04.169734 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHPort
	I0127 11:25:04.170174 1732084 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/addons-010792/id_rsa Username:docker}
	I0127 11:25:04.170189 1732084 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:25:04.170282 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHKeyPath
	I0127 11:25:04.170339 1732084 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:25:04.170523 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHUsername
	I0127 11:25:04.170687 1732084 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/addons-010792/id_rsa Username:docker}
	I0127 11:25:04.171125 1732084 main.go:141] libmachine: Using API Version  1
	I0127 11:25:04.171146 1732084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:25:04.171329 1732084 main.go:141] libmachine: Using API Version  1
	I0127 11:25:04.171342 1732084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:25:04.171720 1732084 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:25:04.171900 1732084 main.go:141] libmachine: (addons-010792) Calling .GetState
	I0127 11:25:04.173462 1732084 main.go:141] libmachine: (addons-010792) Calling .DriverName
	I0127 11:25:04.174557 1732084 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:25:04.174807 1732084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43451
	I0127 11:25:04.174815 1732084 main.go:141] libmachine: (addons-010792) Calling .GetState
	I0127 11:25:04.175075 1732084 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0127 11:25:04.175512 1732084 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:25:04.176122 1732084 main.go:141] libmachine: Using API Version  1
	I0127 11:25:04.176139 1732084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:25:04.176389 1732084 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0127 11:25:04.176409 1732084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0127 11:25:04.176417 1732084 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:25:04.176428 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHHostname
	I0127 11:25:04.176504 1732084 main.go:141] libmachine: (addons-010792) Calling .DriverName
	I0127 11:25:04.176943 1732084 main.go:141] libmachine: (addons-010792) Calling .GetState
	I0127 11:25:04.178111 1732084 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.28
	I0127 11:25:04.178836 1732084 main.go:141] libmachine: (addons-010792) Calling .DriverName
	I0127 11:25:04.179342 1732084 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0127 11:25:04.179360 1732084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0127 11:25:04.179377 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHHostname
	I0127 11:25:04.179638 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:25:04.180023 1732084 main.go:141] libmachine: (addons-010792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:24:d7", ip: ""} in network mk-addons-010792: {Iface:virbr1 ExpiryTime:2025-01-27 12:24:36 +0000 UTC Type:0 Mac:52:54:00:96:24:d7 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-010792 Clientid:01:52:54:00:96:24:d7}
	I0127 11:25:04.180046 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined IP address 192.168.39.45 and MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:25:04.180190 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHPort
	I0127 11:25:04.180434 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHKeyPath
	I0127 11:25:04.180618 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHUsername
	I0127 11:25:04.180764 1732084 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0127 11:25:04.180817 1732084 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/addons-010792/id_rsa Username:docker}
	I0127 11:25:04.181988 1732084 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0127 11:25:04.182010 1732084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0127 11:25:04.182028 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHHostname
	I0127 11:25:04.183264 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:25:04.183596 1732084 main.go:141] libmachine: (addons-010792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:24:d7", ip: ""} in network mk-addons-010792: {Iface:virbr1 ExpiryTime:2025-01-27 12:24:36 +0000 UTC Type:0 Mac:52:54:00:96:24:d7 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-010792 Clientid:01:52:54:00:96:24:d7}
	I0127 11:25:04.183622 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined IP address 192.168.39.45 and MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:25:04.183864 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHPort
	I0127 11:25:04.184052 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHKeyPath
	I0127 11:25:04.184182 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHUsername
	I0127 11:25:04.184290 1732084 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/addons-010792/id_rsa Username:docker}
	I0127 11:25:04.185931 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:25:04.186313 1732084 main.go:141] libmachine: (addons-010792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:24:d7", ip: ""} in network mk-addons-010792: {Iface:virbr1 ExpiryTime:2025-01-27 12:24:36 +0000 UTC Type:0 Mac:52:54:00:96:24:d7 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-010792 Clientid:01:52:54:00:96:24:d7}
	I0127 11:25:04.186336 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined IP address 192.168.39.45 and MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:25:04.186492 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHPort
	I0127 11:25:04.186644 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHKeyPath
	I0127 11:25:04.186824 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHUsername
	I0127 11:25:04.186916 1732084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38861
	I0127 11:25:04.187079 1732084 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/addons-010792/id_rsa Username:docker}
	I0127 11:25:04.187288 1732084 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:25:04.187688 1732084 main.go:141] libmachine: Using API Version  1
	I0127 11:25:04.187707 1732084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:25:04.188186 1732084 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:25:04.188375 1732084 main.go:141] libmachine: (addons-010792) Calling .GetState
	W0127 11:25:04.188588 1732084 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54286->192.168.39.45:22: read: connection reset by peer
	I0127 11:25:04.188614 1732084 retry.go:31] will retry after 129.015575ms: ssh: handshake failed: read tcp 192.168.39.1:54286->192.168.39.45:22: read: connection reset by peer
	I0127 11:25:04.190175 1732084 main.go:141] libmachine: (addons-010792) Calling .DriverName
	I0127 11:25:04.192055 1732084 out.go:177]   - Using image docker.io/busybox:stable
	I0127 11:25:04.193440 1732084 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0127 11:25:04.194830 1732084 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0127 11:25:04.194851 1732084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0127 11:25:04.194865 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHHostname
	I0127 11:25:04.197312 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:25:04.197614 1732084 main.go:141] libmachine: (addons-010792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:24:d7", ip: ""} in network mk-addons-010792: {Iface:virbr1 ExpiryTime:2025-01-27 12:24:36 +0000 UTC Type:0 Mac:52:54:00:96:24:d7 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-010792 Clientid:01:52:54:00:96:24:d7}
	I0127 11:25:04.197633 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined IP address 192.168.39.45 and MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:25:04.197822 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHPort
	I0127 11:25:04.198013 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHKeyPath
	I0127 11:25:04.198175 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHUsername
	I0127 11:25:04.198286 1732084 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/addons-010792/id_rsa Username:docker}
	I0127 11:25:04.201149 1732084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43113
	I0127 11:25:04.201674 1732084 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:25:04.202255 1732084 main.go:141] libmachine: Using API Version  1
	I0127 11:25:04.202276 1732084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:25:04.202623 1732084 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:25:04.202964 1732084 main.go:141] libmachine: (addons-010792) Calling .GetState
	I0127 11:25:04.204261 1732084 main.go:141] libmachine: (addons-010792) Calling .DriverName
	I0127 11:25:04.204478 1732084 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 11:25:04.204489 1732084 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 11:25:04.204500 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHHostname
	I0127 11:25:04.207977 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:25:04.208467 1732084 main.go:141] libmachine: (addons-010792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:24:d7", ip: ""} in network mk-addons-010792: {Iface:virbr1 ExpiryTime:2025-01-27 12:24:36 +0000 UTC Type:0 Mac:52:54:00:96:24:d7 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-010792 Clientid:01:52:54:00:96:24:d7}
	I0127 11:25:04.208479 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined IP address 192.168.39.45 and MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:25:04.208794 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHPort
	I0127 11:25:04.208968 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHKeyPath
	I0127 11:25:04.209106 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHUsername
	I0127 11:25:04.209234 1732084 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/addons-010792/id_rsa Username:docker}
	W0127 11:25:04.320573 1732084 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54324->192.168.39.45:22: read: connection reset by peer
	I0127 11:25:04.320605 1732084 retry.go:31] will retry after 435.433539ms: ssh: handshake failed: read tcp 192.168.39.1:54324->192.168.39.45:22: read: connection reset by peer
	I0127 11:25:04.331324 1732084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:25:04.331379 1732084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0127 11:25:04.458685 1732084 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0127 11:25:04.458709 1732084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0127 11:25:04.461380 1732084 node_ready.go:35] waiting up to 6m0s for node "addons-010792" to be "Ready" ...
	I0127 11:25:04.464748 1732084 node_ready.go:49] node "addons-010792" has status "Ready":"True"
	I0127 11:25:04.464780 1732084 node_ready.go:38] duration metric: took 3.365665ms for node "addons-010792" to be "Ready" ...
	I0127 11:25:04.464789 1732084 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:25:04.472565 1732084 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-010792" in "kube-system" namespace to be "Ready" ...
	I0127 11:25:04.521151 1732084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0127 11:25:04.541065 1732084 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0127 11:25:04.541093 1732084 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0127 11:25:04.563088 1732084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:25:04.608410 1732084 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 11:25:04.608433 1732084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0127 11:25:04.619054 1732084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0127 11:25:04.635592 1732084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0127 11:25:04.635985 1732084 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0127 11:25:04.636005 1732084 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0127 11:25:04.668529 1732084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0127 11:25:04.675278 1732084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0127 11:25:04.719196 1732084 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0127 11:25:04.719228 1732084 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0127 11:25:04.739074 1732084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0127 11:25:04.745202 1732084 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0127 11:25:04.745233 1732084 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0127 11:25:04.805879 1732084 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0127 11:25:04.805910 1732084 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0127 11:25:04.836308 1732084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 11:25:04.868174 1732084 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 11:25:04.868213 1732084 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 11:25:04.884849 1732084 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0127 11:25:04.884885 1732084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0127 11:25:04.910409 1732084 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0127 11:25:04.910443 1732084 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0127 11:25:04.925597 1732084 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0127 11:25:04.925632 1732084 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0127 11:25:04.941636 1732084 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0127 11:25:04.941665 1732084 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0127 11:25:05.097706 1732084 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 11:25:05.097743 1732084 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 11:25:05.099062 1732084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0127 11:25:05.151861 1732084 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0127 11:25:05.151903 1732084 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0127 11:25:05.179693 1732084 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0127 11:25:05.179724 1732084 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0127 11:25:05.231576 1732084 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0127 11:25:05.231604 1732084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0127 11:25:05.317626 1732084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 11:25:05.333383 1732084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0127 11:25:05.336145 1732084 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0127 11:25:05.336171 1732084 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0127 11:25:05.394910 1732084 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0127 11:25:05.394956 1732084 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0127 11:25:05.398732 1732084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0127 11:25:05.541371 1732084 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0127 11:25:05.541399 1732084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0127 11:25:05.619173 1732084 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0127 11:25:05.619214 1732084 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0127 11:25:05.796834 1732084 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0127 11:25:05.796863 1732084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0127 11:25:05.800783 1732084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0127 11:25:06.115209 1732084 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0127 11:25:06.115238 1732084 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0127 11:25:06.289462 1732084 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0127 11:25:06.289487 1732084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0127 11:25:06.354034 1732084 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.022619138s)
	I0127 11:25:06.354073 1732084 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0127 11:25:06.493324 1732084 pod_ready.go:103] pod "etcd-addons-010792" in "kube-system" namespace has status "Ready":"False"
	I0127 11:25:06.540698 1732084 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0127 11:25:06.540723 1732084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0127 11:25:06.865887 1732084 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-010792" context rescaled to 1 replicas
	I0127 11:25:06.877993 1732084 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0127 11:25:06.878027 1732084 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0127 11:25:07.031751 1732084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0127 11:25:08.828962 1732084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.307759183s)
	I0127 11:25:08.829028 1732084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.265901602s)
	I0127 11:25:08.829033 1732084 main.go:141] libmachine: Making call to close driver server
	I0127 11:25:08.829073 1732084 main.go:141] libmachine: Making call to close driver server
	I0127 11:25:08.829090 1732084 main.go:141] libmachine: (addons-010792) Calling .Close
	I0127 11:25:08.829100 1732084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.210021723s)
	I0127 11:25:08.829125 1732084 main.go:141] libmachine: Making call to close driver server
	I0127 11:25:08.829142 1732084 main.go:141] libmachine: (addons-010792) Calling .Close
	I0127 11:25:08.829092 1732084 main.go:141] libmachine: (addons-010792) Calling .Close
	I0127 11:25:08.829165 1732084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.193543676s)
	I0127 11:25:08.829193 1732084 main.go:141] libmachine: Making call to close driver server
	I0127 11:25:08.829201 1732084 main.go:141] libmachine: (addons-010792) Calling .Close
	I0127 11:25:08.829639 1732084 main.go:141] libmachine: (addons-010792) DBG | Closing plugin on server side
	I0127 11:25:08.829649 1732084 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:25:08.829659 1732084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:25:08.829668 1732084 main.go:141] libmachine: Making call to close driver server
	I0127 11:25:08.829668 1732084 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:25:08.829676 1732084 main.go:141] libmachine: (addons-010792) Calling .Close
	I0127 11:25:08.829701 1732084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:25:08.829713 1732084 main.go:141] libmachine: Making call to close driver server
	I0127 11:25:08.829711 1732084 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:25:08.829722 1732084 main.go:141] libmachine: (addons-010792) Calling .Close
	I0127 11:25:08.829722 1732084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:25:08.829731 1732084 main.go:141] libmachine: Making call to close driver server
	I0127 11:25:08.829738 1732084 main.go:141] libmachine: (addons-010792) Calling .Close
	I0127 11:25:08.829670 1732084 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:25:08.829779 1732084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:25:08.829789 1732084 main.go:141] libmachine: Making call to close driver server
	I0127 11:25:08.829796 1732084 main.go:141] libmachine: (addons-010792) Calling .Close
	I0127 11:25:08.829973 1732084 main.go:141] libmachine: (addons-010792) DBG | Closing plugin on server side
	I0127 11:25:08.830027 1732084 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:25:08.830050 1732084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:25:08.830110 1732084 main.go:141] libmachine: (addons-010792) DBG | Closing plugin on server side
	I0127 11:25:08.830150 1732084 main.go:141] libmachine: (addons-010792) DBG | Closing plugin on server side
	I0127 11:25:08.830183 1732084 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:25:08.830306 1732084 main.go:141] libmachine: (addons-010792) DBG | Closing plugin on server side
	I0127 11:25:08.830311 1732084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:25:08.830374 1732084 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:25:08.830395 1732084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:25:08.831727 1732084 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:25:08.831749 1732084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:25:09.002362 1732084 pod_ready.go:103] pod "etcd-addons-010792" in "kube-system" namespace has status "Ready":"False"
	I0127 11:25:09.174094 1732084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.505521121s)
	I0127 11:25:09.174166 1732084 main.go:141] libmachine: Making call to close driver server
	I0127 11:25:09.174184 1732084 main.go:141] libmachine: (addons-010792) Calling .Close
	I0127 11:25:09.174455 1732084 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:25:09.174475 1732084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:25:09.174491 1732084 main.go:141] libmachine: Making call to close driver server
	I0127 11:25:09.174500 1732084 main.go:141] libmachine: (addons-010792) Calling .Close
	I0127 11:25:09.174755 1732084 main.go:141] libmachine: (addons-010792) DBG | Closing plugin on server side
	I0127 11:25:09.174796 1732084 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:25:09.174807 1732084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:25:09.267243 1732084 main.go:141] libmachine: Making call to close driver server
	I0127 11:25:09.267276 1732084 main.go:141] libmachine: (addons-010792) Calling .Close
	I0127 11:25:09.267620 1732084 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:25:09.267632 1732084 main.go:141] libmachine: (addons-010792) DBG | Closing plugin on server side
	I0127 11:25:09.267651 1732084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:25:10.965697 1732084 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0127 11:25:10.965756 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHHostname
	I0127 11:25:10.969233 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:25:10.969756 1732084 main.go:141] libmachine: (addons-010792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:24:d7", ip: ""} in network mk-addons-010792: {Iface:virbr1 ExpiryTime:2025-01-27 12:24:36 +0000 UTC Type:0 Mac:52:54:00:96:24:d7 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-010792 Clientid:01:52:54:00:96:24:d7}
	I0127 11:25:10.969795 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined IP address 192.168.39.45 and MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:25:10.970024 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHPort
	I0127 11:25:10.970267 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHKeyPath
	I0127 11:25:10.970427 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHUsername
	I0127 11:25:10.970646 1732084 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/addons-010792/id_rsa Username:docker}
	I0127 11:25:11.211140 1732084 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0127 11:25:11.381673 1732084 addons.go:238] Setting addon gcp-auth=true in "addons-010792"
	I0127 11:25:11.381741 1732084 host.go:66] Checking if "addons-010792" exists ...
	I0127 11:25:11.382079 1732084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:25:11.382139 1732084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:25:11.397222 1732084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33353
	I0127 11:25:11.397655 1732084 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:25:11.398131 1732084 main.go:141] libmachine: Using API Version  1
	I0127 11:25:11.398155 1732084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:25:11.398488 1732084 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:25:11.399022 1732084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:25:11.399075 1732084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:25:11.414425 1732084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36323
	I0127 11:25:11.415033 1732084 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:25:11.415622 1732084 main.go:141] libmachine: Using API Version  1
	I0127 11:25:11.415647 1732084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:25:11.415973 1732084 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:25:11.416180 1732084 main.go:141] libmachine: (addons-010792) Calling .GetState
	I0127 11:25:11.417871 1732084 main.go:141] libmachine: (addons-010792) Calling .DriverName
	I0127 11:25:11.418111 1732084 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0127 11:25:11.418133 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHHostname
	I0127 11:25:11.420968 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:25:11.421395 1732084 main.go:141] libmachine: (addons-010792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:24:d7", ip: ""} in network mk-addons-010792: {Iface:virbr1 ExpiryTime:2025-01-27 12:24:36 +0000 UTC Type:0 Mac:52:54:00:96:24:d7 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-010792 Clientid:01:52:54:00:96:24:d7}
	I0127 11:25:11.421425 1732084 main.go:141] libmachine: (addons-010792) DBG | domain addons-010792 has defined IP address 192.168.39.45 and MAC address 52:54:00:96:24:d7 in network mk-addons-010792
	I0127 11:25:11.421679 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHPort
	I0127 11:25:11.421865 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHKeyPath
	I0127 11:25:11.422002 1732084 main.go:141] libmachine: (addons-010792) Calling .GetSSHUsername
	I0127 11:25:11.422133 1732084 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/addons-010792/id_rsa Username:docker}
	I0127 11:25:11.478556 1732084 pod_ready.go:103] pod "etcd-addons-010792" in "kube-system" namespace has status "Ready":"False"
	I0127 11:25:12.122724 1732084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.447404096s)
	I0127 11:25:12.122790 1732084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.383678641s)
	I0127 11:25:12.122821 1732084 main.go:141] libmachine: Making call to close driver server
	I0127 11:25:12.122836 1732084 main.go:141] libmachine: (addons-010792) Calling .Close
	I0127 11:25:12.122844 1732084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.286502377s)
	I0127 11:25:12.122881 1732084 main.go:141] libmachine: Making call to close driver server
	I0127 11:25:12.122889 1732084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.023801714s)
	I0127 11:25:12.122838 1732084 main.go:141] libmachine: Making call to close driver server
	I0127 11:25:12.122897 1732084 main.go:141] libmachine: (addons-010792) Calling .Close
	I0127 11:25:12.122915 1732084 main.go:141] libmachine: Making call to close driver server
	I0127 11:25:12.122929 1732084 main.go:141] libmachine: (addons-010792) Calling .Close
	I0127 11:25:12.122974 1732084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.805313809s)
	I0127 11:25:12.122912 1732084 main.go:141] libmachine: (addons-010792) Calling .Close
	I0127 11:25:12.123000 1732084 main.go:141] libmachine: Making call to close driver server
	I0127 11:25:12.123012 1732084 main.go:141] libmachine: (addons-010792) Calling .Close
	I0127 11:25:12.123036 1732084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.789617045s)
	I0127 11:25:12.123059 1732084 main.go:141] libmachine: Making call to close driver server
	I0127 11:25:12.123070 1732084 main.go:141] libmachine: (addons-010792) Calling .Close
	I0127 11:25:12.123091 1732084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.724322272s)
	I0127 11:25:12.123108 1732084 main.go:141] libmachine: Making call to close driver server
	I0127 11:25:12.123117 1732084 main.go:141] libmachine: (addons-010792) Calling .Close
	I0127 11:25:12.123237 1732084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.322422117s)
	W0127 11:25:12.123266 1732084 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0127 11:25:12.123289 1732084 retry.go:31] will retry after 211.980677ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0127 11:25:12.123753 1732084 main.go:141] libmachine: (addons-010792) DBG | Closing plugin on server side
	I0127 11:25:12.123754 1732084 main.go:141] libmachine: (addons-010792) DBG | Closing plugin on server side
	I0127 11:25:12.123781 1732084 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:25:12.123784 1732084 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:25:12.123794 1732084 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:25:12.123799 1732084 main.go:141] libmachine: (addons-010792) DBG | Closing plugin on server side
	I0127 11:25:12.123803 1732084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:25:12.123804 1732084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:25:12.123809 1732084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:25:12.123814 1732084 main.go:141] libmachine: Making call to close driver server
	I0127 11:25:12.123818 1732084 main.go:141] libmachine: Making call to close driver server
	I0127 11:25:12.123823 1732084 main.go:141] libmachine: (addons-010792) Calling .Close
	I0127 11:25:12.123827 1732084 main.go:141] libmachine: (addons-010792) Calling .Close
	I0127 11:25:12.123814 1732084 main.go:141] libmachine: Making call to close driver server
	I0127 11:25:12.123876 1732084 main.go:141] libmachine: (addons-010792) Calling .Close
	I0127 11:25:12.123881 1732084 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:25:12.123888 1732084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:25:12.123895 1732084 main.go:141] libmachine: Making call to close driver server
	I0127 11:25:12.123901 1732084 main.go:141] libmachine: (addons-010792) Calling .Close
	I0127 11:25:12.123939 1732084 main.go:141] libmachine: (addons-010792) DBG | Closing plugin on server side
	I0127 11:25:12.123953 1732084 main.go:141] libmachine: (addons-010792) DBG | Closing plugin on server side
	I0127 11:25:12.123971 1732084 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:25:12.123978 1732084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:25:12.123984 1732084 main.go:141] libmachine: Making call to close driver server
	I0127 11:25:12.123990 1732084 main.go:141] libmachine: (addons-010792) Calling .Close
	I0127 11:25:12.124027 1732084 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:25:12.124037 1732084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:25:12.124044 1732084 main.go:141] libmachine: Making call to close driver server
	I0127 11:25:12.124050 1732084 main.go:141] libmachine: (addons-010792) Calling .Close
	I0127 11:25:12.124086 1732084 main.go:141] libmachine: (addons-010792) DBG | Closing plugin on server side
	I0127 11:25:12.124107 1732084 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:25:12.124116 1732084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:25:12.124118 1732084 main.go:141] libmachine: (addons-010792) DBG | Closing plugin on server side
	I0127 11:25:12.124124 1732084 main.go:141] libmachine: Making call to close driver server
	I0127 11:25:12.124131 1732084 main.go:141] libmachine: (addons-010792) Calling .Close
	I0127 11:25:12.124144 1732084 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:25:12.124151 1732084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:25:12.125584 1732084 main.go:141] libmachine: (addons-010792) DBG | Closing plugin on server side
	I0127 11:25:12.125613 1732084 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:25:12.125619 1732084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:25:12.125629 1732084 addons.go:479] Verifying addon metrics-server=true in "addons-010792"
	I0127 11:25:12.126609 1732084 main.go:141] libmachine: (addons-010792) DBG | Closing plugin on server side
	I0127 11:25:12.126645 1732084 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:25:12.126653 1732084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:25:12.126797 1732084 main.go:141] libmachine: (addons-010792) DBG | Closing plugin on server side
	I0127 11:25:12.126823 1732084 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:25:12.126830 1732084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:25:12.126940 1732084 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:25:12.126951 1732084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:25:12.126960 1732084 addons.go:479] Verifying addon ingress=true in "addons-010792"
	I0127 11:25:12.127066 1732084 main.go:141] libmachine: (addons-010792) DBG | Closing plugin on server side
	I0127 11:25:12.127085 1732084 main.go:141] libmachine: (addons-010792) DBG | Closing plugin on server side
	I0127 11:25:12.127124 1732084 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:25:12.127131 1732084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:25:12.126912 1732084 main.go:141] libmachine: (addons-010792) DBG | Closing plugin on server side
	I0127 11:25:12.127207 1732084 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:25:12.127220 1732084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:25:12.127228 1732084 addons.go:479] Verifying addon registry=true in "addons-010792"
	I0127 11:25:12.128187 1732084 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-010792 service yakd-dashboard -n yakd-dashboard
	
	I0127 11:25:12.130018 1732084 out.go:177] * Verifying registry addon...
	I0127 11:25:12.130029 1732084 out.go:177] * Verifying ingress addon...
	I0127 11:25:12.131963 1732084 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0127 11:25:12.132201 1732084 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0127 11:25:12.149065 1732084 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0127 11:25:12.149090 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:12.149239 1732084 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0127 11:25:12.149253 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:12.156656 1732084 main.go:141] libmachine: Making call to close driver server
	I0127 11:25:12.156678 1732084 main.go:141] libmachine: (addons-010792) Calling .Close
	I0127 11:25:12.156983 1732084 main.go:141] libmachine: (addons-010792) DBG | Closing plugin on server side
	I0127 11:25:12.157029 1732084 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:25:12.157050 1732084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:25:12.335729 1732084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0127 11:25:12.642846 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:12.643067 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:13.000129 1732084 pod_ready.go:93] pod "etcd-addons-010792" in "kube-system" namespace has status "Ready":"True"
	I0127 11:25:13.000155 1732084 pod_ready.go:82] duration metric: took 8.527564821s for pod "etcd-addons-010792" in "kube-system" namespace to be "Ready" ...
	I0127 11:25:13.000170 1732084 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-010792" in "kube-system" namespace to be "Ready" ...
	I0127 11:25:13.049196 1732084 pod_ready.go:93] pod "kube-apiserver-addons-010792" in "kube-system" namespace has status "Ready":"True"
	I0127 11:25:13.049226 1732084 pod_ready.go:82] duration metric: took 49.04715ms for pod "kube-apiserver-addons-010792" in "kube-system" namespace to be "Ready" ...
	I0127 11:25:13.049240 1732084 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-010792" in "kube-system" namespace to be "Ready" ...
	I0127 11:25:13.071250 1732084 pod_ready.go:93] pod "kube-controller-manager-addons-010792" in "kube-system" namespace has status "Ready":"True"
	I0127 11:25:13.071276 1732084 pod_ready.go:82] duration metric: took 22.026999ms for pod "kube-controller-manager-addons-010792" in "kube-system" namespace to be "Ready" ...
	I0127 11:25:13.071290 1732084 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-657tw" in "kube-system" namespace to be "Ready" ...
	I0127 11:25:13.106827 1732084 pod_ready.go:93] pod "kube-proxy-657tw" in "kube-system" namespace has status "Ready":"True"
	I0127 11:25:13.106855 1732084 pod_ready.go:82] duration metric: took 35.55589ms for pod "kube-proxy-657tw" in "kube-system" namespace to be "Ready" ...
	I0127 11:25:13.106866 1732084 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-010792" in "kube-system" namespace to be "Ready" ...
	I0127 11:25:13.107074 1732084 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.688945334s)
	I0127 11:25:13.107075 1732084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.075271179s)
	I0127 11:25:13.107123 1732084 main.go:141] libmachine: Making call to close driver server
	I0127 11:25:13.107145 1732084 main.go:141] libmachine: (addons-010792) Calling .Close
	I0127 11:25:13.107425 1732084 main.go:141] libmachine: (addons-010792) DBG | Closing plugin on server side
	I0127 11:25:13.107487 1732084 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:25:13.107507 1732084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:25:13.107521 1732084 main.go:141] libmachine: Making call to close driver server
	I0127 11:25:13.107534 1732084 main.go:141] libmachine: (addons-010792) Calling .Close
	I0127 11:25:13.107770 1732084 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:25:13.107791 1732084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:25:13.107803 1732084 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-010792"
	I0127 11:25:13.108375 1732084 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0127 11:25:13.109215 1732084 out.go:177] * Verifying csi-hostpath-driver addon...
	I0127 11:25:13.110659 1732084 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0127 11:25:13.111528 1732084 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0127 11:25:13.112081 1732084 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0127 11:25:13.112107 1732084 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0127 11:25:13.142209 1732084 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0127 11:25:13.142238 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:13.148392 1732084 pod_ready.go:93] pod "kube-scheduler-addons-010792" in "kube-system" namespace has status "Ready":"True"
	I0127 11:25:13.148422 1732084 pod_ready.go:82] duration metric: took 41.547629ms for pod "kube-scheduler-addons-010792" in "kube-system" namespace to be "Ready" ...
	I0127 11:25:13.148440 1732084 pod_ready.go:39] duration metric: took 8.683634504s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:25:13.148462 1732084 api_server.go:52] waiting for apiserver process to appear ...
	I0127 11:25:13.148536 1732084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:25:13.168528 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:13.168678 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:13.255669 1732084 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0127 11:25:13.255697 1732084 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0127 11:25:13.428565 1732084 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0127 11:25:13.428591 1732084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0127 11:25:13.457407 1732084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0127 11:25:13.616347 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:13.634898 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:13.636682 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:14.049496 1732084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.713708163s)
	I0127 11:25:14.049562 1732084 main.go:141] libmachine: Making call to close driver server
	I0127 11:25:14.049582 1732084 main.go:141] libmachine: (addons-010792) Calling .Close
	I0127 11:25:14.049618 1732084 api_server.go:72] duration metric: took 10.03848454s to wait for apiserver process to appear ...
	I0127 11:25:14.049650 1732084 api_server.go:88] waiting for apiserver healthz status ...
	I0127 11:25:14.049676 1732084 api_server.go:253] Checking apiserver healthz at https://192.168.39.45:8443/healthz ...
	I0127 11:25:14.049926 1732084 main.go:141] libmachine: (addons-010792) DBG | Closing plugin on server side
	I0127 11:25:14.049980 1732084 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:25:14.050001 1732084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:25:14.050017 1732084 main.go:141] libmachine: Making call to close driver server
	I0127 11:25:14.050029 1732084 main.go:141] libmachine: (addons-010792) Calling .Close
	I0127 11:25:14.050265 1732084 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:25:14.050353 1732084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:25:14.050331 1732084 main.go:141] libmachine: (addons-010792) DBG | Closing plugin on server side
	I0127 11:25:14.054312 1732084 api_server.go:279] https://192.168.39.45:8443/healthz returned 200:
	ok
	I0127 11:25:14.055183 1732084 api_server.go:141] control plane version: v1.32.1
	I0127 11:25:14.055211 1732084 api_server.go:131] duration metric: took 5.551047ms to wait for apiserver health ...
	I0127 11:25:14.055221 1732084 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 11:25:14.067140 1732084 system_pods.go:59] 19 kube-system pods found
	I0127 11:25:14.067189 1732084 system_pods.go:61] "amd-gpu-device-plugin-lt7hj" [60442dee-924f-45d7-b51b-92bb1a51d828] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0127 11:25:14.067199 1732084 system_pods.go:61] "coredns-668d6bf9bc-dvzt2" [7d8159fb-e4e1-401a-a9fb-6d42bc4d838a] Running
	I0127 11:25:14.067213 1732084 system_pods.go:61] "coredns-668d6bf9bc-ms98v" [8081f480-cfa3-4107-a26e-4a51503b0d52] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 11:25:14.067226 1732084 system_pods.go:61] "csi-hostpath-attacher-0" [8604659e-d317-472a-9e10-99b36b480577] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0127 11:25:14.067242 1732084 system_pods.go:61] "csi-hostpath-resizer-0" [6e4e3ef7-024e-4192-8273-bb196902ecbc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0127 11:25:14.067253 1732084 system_pods.go:61] "csi-hostpathplugin-dkmxd" [c9af4923-d14a-41e3-912e-f43e52a0ff79] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0127 11:25:14.067262 1732084 system_pods.go:61] "etcd-addons-010792" [db953eb5-dadf-4054-b695-cd51eb337c5d] Running
	I0127 11:25:14.067271 1732084 system_pods.go:61] "kube-apiserver-addons-010792" [63e6a660-eb37-405c-b873-8d153773960b] Running
	I0127 11:25:14.067280 1732084 system_pods.go:61] "kube-controller-manager-addons-010792" [aa130798-d515-412b-8dce-1838c0e4b59b] Running
	I0127 11:25:14.067288 1732084 system_pods.go:61] "kube-ingress-dns-minikube" [5fef84bd-ed4e-4a26-9793-a9e515f5c005] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0127 11:25:14.067297 1732084 system_pods.go:61] "kube-proxy-657tw" [54a2b574-ac90-4b6a-b1cc-2ce30a926b4a] Running
	I0127 11:25:14.067303 1732084 system_pods.go:61] "kube-scheduler-addons-010792" [21fd202f-d702-4081-80f7-1595200ffb55] Running
	I0127 11:25:14.067311 1732084 system_pods.go:61] "metrics-server-7fbb699795-sqf8v" [df1191e5-c231-47c1-8b91-358cc72cdd03] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 11:25:14.067323 1732084 system_pods.go:61] "nvidia-device-plugin-daemonset-sdq9s" [aae0f9ef-186d-454f-bead-016953edfdbe] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0127 11:25:14.067335 1732084 system_pods.go:61] "registry-6c88467877-2rdf7" [3d0c6731-428a-4f72-bdcf-d9af53b4e161] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0127 11:25:14.067346 1732084 system_pods.go:61] "registry-proxy-nr6jq" [ce9ee54b-9eed-45c9-897f-850f5632d1a5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0127 11:25:14.067357 1732084 system_pods.go:61] "snapshot-controller-68b874b76f-g4pjn" [22425198-8f19-4758-9912-17482ac17e97] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0127 11:25:14.067406 1732084 system_pods.go:61] "snapshot-controller-68b874b76f-xq57j" [0c32a36a-26ad-4480-a31a-cde35caa99ae] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0127 11:25:14.067418 1732084 system_pods.go:61] "storage-provisioner" [b027264b-8471-4994-9afc-0a96016c98f3] Running
	I0127 11:25:14.067425 1732084 system_pods.go:74] duration metric: took 12.197615ms to wait for pod list to return data ...
	I0127 11:25:14.067433 1732084 default_sa.go:34] waiting for default service account to be created ...
	I0127 11:25:14.069270 1732084 default_sa.go:45] found service account: "default"
	I0127 11:25:14.069286 1732084 default_sa.go:55] duration metric: took 1.846448ms for default service account to be created ...
	I0127 11:25:14.069293 1732084 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 11:25:14.079081 1732084 system_pods.go:87] 19 kube-system pods found
	I0127 11:25:14.115735 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:14.136085 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:14.137791 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:14.723691 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:14.723834 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:14.723853 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:14.768438 1732084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.310970354s)
	I0127 11:25:14.768513 1732084 main.go:141] libmachine: Making call to close driver server
	I0127 11:25:14.768537 1732084 main.go:141] libmachine: (addons-010792) Calling .Close
	I0127 11:25:14.768846 1732084 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:25:14.768863 1732084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:25:14.768872 1732084 main.go:141] libmachine: Making call to close driver server
	I0127 11:25:14.768880 1732084 main.go:141] libmachine: (addons-010792) Calling .Close
	I0127 11:25:14.768883 1732084 main.go:141] libmachine: (addons-010792) DBG | Closing plugin on server side
	I0127 11:25:14.769141 1732084 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:25:14.769167 1732084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:25:14.769188 1732084 main.go:141] libmachine: (addons-010792) DBG | Closing plugin on server side
	I0127 11:25:14.770052 1732084 addons.go:479] Verifying addon gcp-auth=true in "addons-010792"
	I0127 11:25:14.772490 1732084 out.go:177] * Verifying gcp-auth addon...
	I0127 11:25:14.774724 1732084 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0127 11:25:14.808340 1732084 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0127 11:25:14.808371 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:15.119682 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:15.135181 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:15.138585 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:15.278039 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:15.616760 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:15.636461 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:15.638596 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:15.877506 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:16.117575 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:16.137651 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:16.138095 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:16.278392 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:16.617016 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:16.635578 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:16.636351 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:16.777741 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:17.118763 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:17.137255 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:17.137951 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:17.278941 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:17.583621 1732084 system_pods.go:105] "amd-gpu-device-plugin-lt7hj" [60442dee-924f-45d7-b51b-92bb1a51d828] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0127 11:25:17.583652 1732084 system_pods.go:105] "coredns-668d6bf9bc-dvzt2" [7d8159fb-e4e1-401a-a9fb-6d42bc4d838a] Running
	I0127 11:25:17.583668 1732084 system_pods.go:105] "coredns-668d6bf9bc-ms98v" [8081f480-cfa3-4107-a26e-4a51503b0d52] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 11:25:17.583682 1732084 system_pods.go:105] "csi-hostpath-attacher-0" [8604659e-d317-472a-9e10-99b36b480577] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0127 11:25:17.583691 1732084 system_pods.go:105] "csi-hostpath-resizer-0" [6e4e3ef7-024e-4192-8273-bb196902ecbc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0127 11:25:17.583704 1732084 system_pods.go:105] "csi-hostpathplugin-dkmxd" [c9af4923-d14a-41e3-912e-f43e52a0ff79] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0127 11:25:17.583711 1732084 system_pods.go:105] "etcd-addons-010792" [db953eb5-dadf-4054-b695-cd51eb337c5d] Running
	I0127 11:25:17.583718 1732084 system_pods.go:105] "kube-apiserver-addons-010792" [63e6a660-eb37-405c-b873-8d153773960b] Running
	I0127 11:25:17.583730 1732084 system_pods.go:105] "kube-controller-manager-addons-010792" [aa130798-d515-412b-8dce-1838c0e4b59b] Running
	I0127 11:25:17.583743 1732084 system_pods.go:105] "kube-ingress-dns-minikube" [5fef84bd-ed4e-4a26-9793-a9e515f5c005] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0127 11:25:17.583749 1732084 system_pods.go:105] "kube-proxy-657tw" [54a2b574-ac90-4b6a-b1cc-2ce30a926b4a] Running
	I0127 11:25:17.583757 1732084 system_pods.go:105] "kube-scheduler-addons-010792" [21fd202f-d702-4081-80f7-1595200ffb55] Running
	I0127 11:25:17.583770 1732084 system_pods.go:105] "metrics-server-7fbb699795-sqf8v" [df1191e5-c231-47c1-8b91-358cc72cdd03] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 11:25:17.583784 1732084 system_pods.go:105] "nvidia-device-plugin-daemonset-sdq9s" [aae0f9ef-186d-454f-bead-016953edfdbe] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0127 11:25:17.583799 1732084 system_pods.go:105] "registry-6c88467877-2rdf7" [3d0c6731-428a-4f72-bdcf-d9af53b4e161] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0127 11:25:17.583808 1732084 system_pods.go:105] "registry-proxy-nr6jq" [ce9ee54b-9eed-45c9-897f-850f5632d1a5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0127 11:25:17.583820 1732084 system_pods.go:105] "snapshot-controller-68b874b76f-g4pjn" [22425198-8f19-4758-9912-17482ac17e97] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0127 11:25:17.583832 1732084 system_pods.go:105] "snapshot-controller-68b874b76f-xq57j" [0c32a36a-26ad-4480-a31a-cde35caa99ae] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0127 11:25:17.583846 1732084 system_pods.go:105] "storage-provisioner" [b027264b-8471-4994-9afc-0a96016c98f3] Running
	I0127 11:25:17.583859 1732084 system_pods.go:147] duration metric: took 3.514558168s to wait for k8s-apps to be running ...
	I0127 11:25:17.583869 1732084 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 11:25:17.583929 1732084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:25:17.601895 1732084 system_svc.go:56] duration metric: took 18.014302ms WaitForService to wait for kubelet
	I0127 11:25:17.601930 1732084 kubeadm.go:582] duration metric: took 13.59080294s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 11:25:17.601951 1732084 node_conditions.go:102] verifying NodePressure condition ...
	I0127 11:25:17.616012 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:17.637598 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:17.637714 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:17.776865 1732084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 11:25:17.776915 1732084 node_conditions.go:123] node cpu capacity is 2
	I0127 11:25:17.776935 1732084 node_conditions.go:105] duration metric: took 174.978152ms to run NodePressure ...
	I0127 11:25:17.776952 1732084 start.go:241] waiting for startup goroutines ...
	I0127 11:25:17.778671 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:18.116400 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:18.135999 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:18.136258 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:18.279140 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:18.764907 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:18.765542 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:18.765887 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:18.777395 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:19.116594 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:19.136282 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:19.136431 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:19.279179 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:19.617201 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:19.636044 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:19.636474 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:19.779536 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:20.117053 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:20.137310 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:20.137332 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:20.277799 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:20.615786 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:20.636292 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:20.636818 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:20.778564 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:21.115181 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:21.135423 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:21.136129 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:21.282524 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:21.617189 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:21.635449 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:21.636191 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:21.778512 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:22.117188 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:22.136880 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:22.137457 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:22.279479 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:22.616902 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:22.636011 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:22.636448 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:22.778381 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:23.116650 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:23.135638 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:23.135958 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:23.278866 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:23.616009 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:23.636154 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:23.636690 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:23.777803 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:24.116944 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:24.135796 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:24.136509 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:24.278739 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:24.615489 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:24.635174 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:24.636350 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:24.777721 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:25.115595 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:25.136159 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:25.136331 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:25.365777 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:25.615488 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:25.635714 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:25.636132 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:25.778644 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:26.116052 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:26.135240 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:26.136034 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:26.278697 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:26.616827 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:26.635516 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:26.636730 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:26.778350 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:27.116159 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:27.135860 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:27.136569 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:27.279528 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:27.616501 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:27.636086 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:27.636667 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:27.778193 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:28.118494 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:28.136857 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:28.140709 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:28.279189 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:28.616318 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:28.636491 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:28.636602 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:28.778039 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:29.116776 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:29.135895 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:29.136518 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:29.278365 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:29.984044 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:29.985484 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:29.985608 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:30.088274 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:30.115787 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:30.136793 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:30.137655 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:30.278276 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:30.616430 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:30.636472 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:30.637167 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:30.778269 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:31.116206 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:31.136940 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:31.137110 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:31.279039 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:31.616173 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:31.635820 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:31.636818 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:31.778581 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:32.116328 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:32.136299 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:32.136813 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:32.280770 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:32.616947 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:32.636158 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:32.636523 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:32.778131 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:33.117045 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:33.135823 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:33.136311 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:33.278587 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:33.618384 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:33.650097 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:33.650739 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:33.777386 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:34.115388 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:34.136574 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:34.137046 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:34.277765 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:34.615403 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:34.636311 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:34.636443 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:34.778059 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:35.117000 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:35.136220 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:35.136559 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:35.277794 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:35.616183 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:35.636077 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:35.636367 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:35.779426 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:36.116843 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:36.136754 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:36.137537 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:36.278560 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:36.616457 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:36.636636 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:36.636830 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:36.778543 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:37.116795 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:37.135908 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:37.136378 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:37.278652 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:37.616862 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:37.634388 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:37.636445 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:37.777757 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:38.115904 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:38.136131 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:38.136655 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:38.278403 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:38.616979 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:38.635380 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:38.635561 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:38.777758 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:39.115799 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:39.138282 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:39.139119 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:39.278699 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:39.616305 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:39.636455 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:39.636524 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:39.778008 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:40.115533 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:40.135938 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:40.137139 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:40.279196 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:40.615736 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:40.636638 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:40.637019 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:40.778119 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:41.115889 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:41.137105 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:41.137306 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:41.278043 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:41.654352 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:41.654801 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:41.654865 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:41.778203 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:42.115778 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:42.137095 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:42.137584 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:42.279013 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:42.616548 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:42.636145 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:42.636400 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:42.778171 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:43.115817 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:43.136963 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:43.137138 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:43.278423 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:43.616233 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:43.635876 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:43.636165 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:43.777938 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:44.116806 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:44.136417 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:44.136847 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:44.278882 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:44.616162 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:44.635254 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:44.636760 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:44.778005 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:45.115503 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:45.136387 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:45.136715 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:45.278635 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:45.617575 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:45.635449 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:45.636139 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:45.778755 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:46.124862 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:46.222515 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:46.222858 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:46.277723 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:46.615746 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:46.635973 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:46.636373 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:46.777776 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:47.116687 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:47.135907 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:47.136354 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:47.278502 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:47.616675 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:47.636664 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:47.636835 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:47.778909 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:48.116385 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:48.135637 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:48.136662 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:48.278807 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:48.618554 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:48.636156 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:48.636437 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:48.778025 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:49.115889 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:49.135006 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:49.136840 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:49.277616 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:49.615413 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:49.636448 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:49.636698 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:49.778942 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:50.115617 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:50.136233 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:50.136545 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:50.278331 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:50.716638 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:50.716831 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:50.718247 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:25:50.777928 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:51.129872 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:51.142555 1732084 kapi.go:107] duration metric: took 39.010587174s to wait for kubernetes.io/minikube-addons=registry ...
	I0127 11:25:51.143194 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:51.280135 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:51.616073 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:51.635399 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:51.777740 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:52.117265 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:52.136845 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:52.277849 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:52.616027 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:52.636511 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:52.777892 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:53.115276 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:53.135735 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:53.278013 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:53.615669 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:53.636342 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:53.779297 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:54.116968 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:54.136689 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:54.278052 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:54.616201 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:54.635519 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:54.778274 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:55.116651 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:55.136985 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:55.278989 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:55.615685 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:55.635825 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:55.778410 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:56.116188 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:56.136594 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:56.278731 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:56.625966 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:56.636747 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:56.777893 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:57.115712 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:57.135992 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:57.278142 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:57.616232 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:57.637306 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:57.778854 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:58.115830 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:58.135942 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:58.283994 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:58.616188 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:58.635951 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:58.779739 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:59.115759 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:59.137321 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:59.278329 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:25:59.617178 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:25:59.636584 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:25:59.777952 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:00.115629 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:00.135633 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:00.277888 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:00.615725 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:00.636511 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:00.777851 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:01.117477 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:01.138683 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:01.278357 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:01.616656 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:01.635902 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:01.807019 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:02.116011 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:02.136890 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:02.278402 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:02.616303 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:02.636154 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:02.779088 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:03.116208 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:03.135929 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:03.278481 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:03.616928 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:03.636401 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:03.778506 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:04.116388 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:04.135668 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:04.278833 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:04.616417 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:04.635919 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:04.778241 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:05.116686 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:05.135473 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:05.278897 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:05.615492 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:05.635744 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:05.777999 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:06.115666 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:06.136302 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:06.284206 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:06.616184 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:06.635556 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:06.777713 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:07.115675 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:07.136451 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:07.277917 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:07.618059 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:07.636891 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:07.778469 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:08.116786 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:08.135565 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:08.287530 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:08.620694 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:08.638211 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:08.779005 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:09.115984 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:09.136021 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:09.278894 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:09.617740 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:09.636656 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:09.778162 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:10.117632 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:10.136597 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:10.278117 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:10.617899 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:10.636669 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:10.778131 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:11.117232 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:11.136776 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:11.751451 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:11.754571 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:11.754899 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:11.777642 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:12.115999 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:12.136392 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:12.277824 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:12.615284 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:12.716839 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:12.777696 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:13.115440 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:13.135581 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:13.278173 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:13.616213 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:13.636189 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:13.778110 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:14.116173 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:14.135461 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:14.279778 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:14.616091 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:14.636558 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:14.777656 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:15.116000 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:15.136510 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:15.280761 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:15.883449 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:15.884043 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:15.884451 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:16.116076 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:16.135808 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:16.278325 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:16.617037 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:16.636875 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:16.778407 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:17.116699 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:17.136308 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:17.278037 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:17.615812 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:17.636743 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:17.778680 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:18.116605 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:18.135990 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:18.278354 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:18.617736 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:18.636518 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:18.778988 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:19.116485 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:19.136789 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:19.279316 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:19.617715 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:19.637240 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:19.778688 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:20.116071 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:20.136338 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:20.280565 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:20.615262 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:20.635645 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:20.777964 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:21.115990 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:21.136870 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:21.278501 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:21.616530 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:21.635906 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:21.778240 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:22.401887 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:22.403654 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:22.404361 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:22.616128 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:22.635643 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:22.778285 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:23.116537 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:23.135816 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:23.280849 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:23.615607 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:23.636550 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:23.779247 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:24.115789 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:24.136795 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:24.279900 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:24.616141 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:24.635380 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:24.778860 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:25.119010 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:25.136123 1732084 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:26:25.279380 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:25.621936 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:25.637394 1732084 kapi.go:107] duration metric: took 1m13.505191785s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0127 11:26:25.777372 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:26.116520 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:26.278111 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:26.655448 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:26.777981 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:27.117127 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:27.282467 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:27.619102 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:27.778494 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:28.116481 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:28.278489 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:28.616889 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:28.778142 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:29.116189 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:29.277992 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:29.616075 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:29.778391 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:30.116899 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:30.280425 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:26:30.617628 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:30.779349 1732084 kapi.go:107] duration metric: took 1m16.004619817s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0127 11:26:30.780808 1732084 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-010792 cluster.
	I0127 11:26:30.781947 1732084 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0127 11:26:30.782942 1732084 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0127 11:26:31.116861 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:31.616313 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:32.325994 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:32.616476 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:33.116272 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:33.616375 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:34.116369 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:34.615948 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:35.116732 1732084 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:26:35.616694 1732084 kapi.go:107] duration metric: took 1m22.505162809s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0127 11:26:35.618509 1732084 out.go:177] * Enabled addons: inspektor-gadget, amd-gpu-device-plugin, storage-provisioner, cloud-spanner, storage-provisioner-rancher, ingress-dns, metrics-server, nvidia-device-plugin, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0127 11:26:35.619794 1732084 addons.go:514] duration metric: took 1m31.608665027s for enable addons: enabled=[inspektor-gadget amd-gpu-device-plugin storage-provisioner cloud-spanner storage-provisioner-rancher ingress-dns metrics-server nvidia-device-plugin yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0127 11:26:35.619842 1732084 start.go:246] waiting for cluster config update ...
	I0127 11:26:35.619869 1732084 start.go:255] writing updated cluster config ...
	I0127 11:26:35.620196 1732084 ssh_runner.go:195] Run: rm -f paused
	I0127 11:26:35.674263 1732084 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 11:26:35.675980 1732084 out.go:177] * Done! kubectl is now configured to use "addons-010792" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 27 11:29:52 addons-010792 crio[663]: time="2025-01-27 11:29:52.296173193Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737977392296149964,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6ebef0d8-da92-4c21-b2a2-dfa53640eddf name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 11:29:52 addons-010792 crio[663]: time="2025-01-27 11:29:52.296794560Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c4a7e63d-13f8-48fe-a15f-f648a3b4f3f7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 11:29:52 addons-010792 crio[663]: time="2025-01-27 11:29:52.296855975Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c4a7e63d-13f8-48fe-a15f-f648a3b4f3f7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 11:29:52 addons-010792 crio[663]: time="2025-01-27 11:29:52.297181403Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:59f0c5ee42cda5ab747be14d1cd0e7b9588c1a795f10253f837414cdf1de0d0e,PodSandboxId:110ed8f4dad9d408304092be15e29d02bd7d31e2d09fa46b59490ae059c7d810,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1737977251988350879,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bdca297a-a0ac-4017-9eda-1326d1b0a09d,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce79a4b02d35cf290043aa80202f64b5ead4515482764ece6cca55f236afdad,PodSandboxId:16c3fc0f79af3adb62b2d434d8d84070e591aecd14f193430a9319b87b788a31,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737977199756743645,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f71e8580-dbab-4556-a5d9-8525eb0f75d4,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:885b33145cbcaf2e6250725f7369371bf698150a810c0be1a74d3e1cb7868d6f,PodSandboxId:6b94b00da6b47e974dbc87567687e9151270034909789ae20eb925ea58e1aff2,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737977185007427844,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-l2gvn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e8dabf7a-bcf2-46c6-be52-e3231977ff7e,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5b27a30a1dde249bfa0eb2ae51082778ed723b3229106fe3e1807299430c3220,PodSandboxId:d83fd9a245177c294cbccb959cf3610a9699991d84fb02c5b0286efe18ecb25f,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737977172526636429,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-s4t59,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0db19b5e-e643-4d86-aacd-961848f506aa,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8020bd72f7328897eee7e92021fb6ce50dd884196388d9c506acf0392fe4018b,PodSandboxId:f0ffcf59508c0da7d64ba39f246ed579efd23c900f32f892b323a7f29ed5a46e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737977172382539726,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-prjgq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a04cab60-da29-4c86-9466-a9180b7aacbc,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c48403d3a143b17397a22cff52d017bba9ea2789a7e297353bed3e9f21925e5d,PodSandboxId:f7a7f02a355f3120ca33c88572f186b77c3a82e24a1ee4452d8a52961d155e8c,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737977122261658366,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-lt7hj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60442dee-924f-45d7-b51b-92bb1a51d828,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93f19a2899c94131e920a7e2b3e404454abd94465c7468b6d1ba567ae10a1cd6,PodSandboxId:1c91a6cbfe54ea4ae5ca1bbec05f165098442d416ba803b0f0c2f2cb447abdc5,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de3
5f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737977119632717337,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fef84bd-ed4e-4a26-9793-a9e515f5c005,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cf3ecc1d083344af148765f624cf449fef0a9324f374bcd382187e3e24c1fa2,PodSandboxId:3a10c64a2b0d3d65a865ad8f3f2e7455865f6507995ead3d01531e410e5fdadc,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737977110346021396,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b027264b-8471-4994-9afc-0a96016c98f3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c553684d4c8a901b4360b5ce44232d78b93bd677ae77391cdc2cd4250015829,PodSandboxId:41f59d0173d8b902d818778affffb5894ce1fb7d93730f7bbd1c7856fa94ec3b,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737977108098641735,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-dvzt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d8159fb-e4e1-401a-a9fb-6d42bc4d838a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:c24814db933f51e041ee883139ff306c3c0ea942719b79e9b84ea8d4f8a541e8,PodSandboxId:9a11435a0bb621a0d749fcd99afb21f477dfe27a2dfac946a9d328bb654e5eff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737977105219828700,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-657tw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54a2b574-ac90-4b6a-b1cc-2ce30a926b4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdcfbc5d1fbd82071bc3fe6a2a6d
e1de837577baf8c11b1db2d14115c136c1d6,PodSandboxId:94b4287e03a0cbbe0c403645ad8b93dc18577226dde20a74811a9e38b5bb9bba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737977094542334854,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-010792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec2016313039499031af07c71cd8e4f9,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4826fc0746d2da4a62ba03c0bf71aee37da2091b243bbdb9e2284f382b0aa892,PodSandbox
Id:912644c0f1fd553dea0c66aa5d3caf2e7fa084ed1820f17484b19d64cd884c87,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737977094529903601,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-010792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 054b3bdf47f4f6c7f164c19ea58cc8ba,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394bd5ae1c014f9aa6785633024e32b0b257a9c9b4b7cbf5de6880ff47bcbb48,PodSandboxId:cfbf7b44d2cd6f
364ec72b2bfa2af567653e3bb1d06abdee14d9f12152ea44f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737977094527505126,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-010792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a525b9574d5b5378001d4ad1e2fbd6e6,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ea697624dd369fbc1dca0e39fa8b0744bbf8661a04c3ee6294bb97320c9fbff,PodSandboxId:4ea9b741f4fb961f7b0096a2a42da03
bae0cf9496158e50a9efdc4b6b9f94cca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737977094382722503,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-010792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86bccba22185355a46fed2028601d22e,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c4a7e63d-13f8-48fe-a15f-f648a3b4f3f7 name=/runtime.v1.RuntimeServ
ice/ListContainers
	Jan 27 11:29:52 addons-010792 crio[663]: time="2025-01-27 11:29:52.338844989Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f59d8dc4-8a9a-4d07-b86d-1e8fda9f17b9 name=/runtime.v1.RuntimeService/Version
	Jan 27 11:29:52 addons-010792 crio[663]: time="2025-01-27 11:29:52.338915288Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f59d8dc4-8a9a-4d07-b86d-1e8fda9f17b9 name=/runtime.v1.RuntimeService/Version
	Jan 27 11:29:52 addons-010792 crio[663]: time="2025-01-27 11:29:52.339891865Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=630ccddf-e45f-4112-be6f-57e2cacf14dd name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 11:29:52 addons-010792 crio[663]: time="2025-01-27 11:29:52.341727506Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737977392341656182,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=630ccddf-e45f-4112-be6f-57e2cacf14dd name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 11:29:52 addons-010792 crio[663]: time="2025-01-27 11:29:52.342397578Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8624214c-75ea-4310-8bf8-97052acfad59 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 11:29:52 addons-010792 crio[663]: time="2025-01-27 11:29:52.342450298Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8624214c-75ea-4310-8bf8-97052acfad59 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 11:29:52 addons-010792 crio[663]: time="2025-01-27 11:29:52.342735592Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:59f0c5ee42cda5ab747be14d1cd0e7b9588c1a795f10253f837414cdf1de0d0e,PodSandboxId:110ed8f4dad9d408304092be15e29d02bd7d31e2d09fa46b59490ae059c7d810,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1737977251988350879,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bdca297a-a0ac-4017-9eda-1326d1b0a09d,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce79a4b02d35cf290043aa80202f64b5ead4515482764ece6cca55f236afdad,PodSandboxId:16c3fc0f79af3adb62b2d434d8d84070e591aecd14f193430a9319b87b788a31,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737977199756743645,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f71e8580-dbab-4556-a5d9-8525eb0f75d4,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:885b33145cbcaf2e6250725f7369371bf698150a810c0be1a74d3e1cb7868d6f,PodSandboxId:6b94b00da6b47e974dbc87567687e9151270034909789ae20eb925ea58e1aff2,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737977185007427844,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-l2gvn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e8dabf7a-bcf2-46c6-be52-e3231977ff7e,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5b27a30a1dde249bfa0eb2ae51082778ed723b3229106fe3e1807299430c3220,PodSandboxId:d83fd9a245177c294cbccb959cf3610a9699991d84fb02c5b0286efe18ecb25f,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737977172526636429,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-s4t59,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0db19b5e-e643-4d86-aacd-961848f506aa,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8020bd72f7328897eee7e92021fb6ce50dd884196388d9c506acf0392fe4018b,PodSandboxId:f0ffcf59508c0da7d64ba39f246ed579efd23c900f32f892b323a7f29ed5a46e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737977172382539726,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-prjgq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a04cab60-da29-4c86-9466-a9180b7aacbc,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c48403d3a143b17397a22cff52d017bba9ea2789a7e297353bed3e9f21925e5d,PodSandboxId:f7a7f02a355f3120ca33c88572f186b77c3a82e24a1ee4452d8a52961d155e8c,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737977122261658366,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-lt7hj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60442dee-924f-45d7-b51b-92bb1a51d828,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93f19a2899c94131e920a7e2b3e404454abd94465c7468b6d1ba567ae10a1cd6,PodSandboxId:1c91a6cbfe54ea4ae5ca1bbec05f165098442d416ba803b0f0c2f2cb447abdc5,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de3
5f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737977119632717337,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fef84bd-ed4e-4a26-9793-a9e515f5c005,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cf3ecc1d083344af148765f624cf449fef0a9324f374bcd382187e3e24c1fa2,PodSandboxId:3a10c64a2b0d3d65a865ad8f3f2e7455865f6507995ead3d01531e410e5fdadc,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737977110346021396,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b027264b-8471-4994-9afc-0a96016c98f3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c553684d4c8a901b4360b5ce44232d78b93bd677ae77391cdc2cd4250015829,PodSandboxId:41f59d0173d8b902d818778affffb5894ce1fb7d93730f7bbd1c7856fa94ec3b,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737977108098641735,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-dvzt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d8159fb-e4e1-401a-a9fb-6d42bc4d838a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:c24814db933f51e041ee883139ff306c3c0ea942719b79e9b84ea8d4f8a541e8,PodSandboxId:9a11435a0bb621a0d749fcd99afb21f477dfe27a2dfac946a9d328bb654e5eff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737977105219828700,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-657tw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54a2b574-ac90-4b6a-b1cc-2ce30a926b4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdcfbc5d1fbd82071bc3fe6a2a6d
e1de837577baf8c11b1db2d14115c136c1d6,PodSandboxId:94b4287e03a0cbbe0c403645ad8b93dc18577226dde20a74811a9e38b5bb9bba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737977094542334854,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-010792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec2016313039499031af07c71cd8e4f9,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4826fc0746d2da4a62ba03c0bf71aee37da2091b243bbdb9e2284f382b0aa892,PodSandbox
Id:912644c0f1fd553dea0c66aa5d3caf2e7fa084ed1820f17484b19d64cd884c87,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737977094529903601,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-010792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 054b3bdf47f4f6c7f164c19ea58cc8ba,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394bd5ae1c014f9aa6785633024e32b0b257a9c9b4b7cbf5de6880ff47bcbb48,PodSandboxId:cfbf7b44d2cd6f
364ec72b2bfa2af567653e3bb1d06abdee14d9f12152ea44f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737977094527505126,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-010792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a525b9574d5b5378001d4ad1e2fbd6e6,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ea697624dd369fbc1dca0e39fa8b0744bbf8661a04c3ee6294bb97320c9fbff,PodSandboxId:4ea9b741f4fb961f7b0096a2a42da03
bae0cf9496158e50a9efdc4b6b9f94cca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737977094382722503,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-010792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86bccba22185355a46fed2028601d22e,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8624214c-75ea-4310-8bf8-97052acfad59 name=/runtime.v1.RuntimeServ
ice/ListContainers
	Jan 27 11:29:52 addons-010792 crio[663]: time="2025-01-27 11:29:52.374424878Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=87e5cf84-281c-4de6-a2dc-6a6ab8ee03f5 name=/runtime.v1.RuntimeService/Version
	Jan 27 11:29:52 addons-010792 crio[663]: time="2025-01-27 11:29:52.374513919Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=87e5cf84-281c-4de6-a2dc-6a6ab8ee03f5 name=/runtime.v1.RuntimeService/Version
	Jan 27 11:29:52 addons-010792 crio[663]: time="2025-01-27 11:29:52.375624995Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cd0e91e3-cf23-472c-adc9-645e3be7e0a5 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 11:29:52 addons-010792 crio[663]: time="2025-01-27 11:29:52.376932572Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737977392376905817,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cd0e91e3-cf23-472c-adc9-645e3be7e0a5 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 11:29:52 addons-010792 crio[663]: time="2025-01-27 11:29:52.377547629Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a3f9d81a-da58-4471-83b5-729df0495680 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 11:29:52 addons-010792 crio[663]: time="2025-01-27 11:29:52.377663420Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a3f9d81a-da58-4471-83b5-729df0495680 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 11:29:52 addons-010792 crio[663]: time="2025-01-27 11:29:52.377970523Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:59f0c5ee42cda5ab747be14d1cd0e7b9588c1a795f10253f837414cdf1de0d0e,PodSandboxId:110ed8f4dad9d408304092be15e29d02bd7d31e2d09fa46b59490ae059c7d810,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1737977251988350879,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bdca297a-a0ac-4017-9eda-1326d1b0a09d,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce79a4b02d35cf290043aa80202f64b5ead4515482764ece6cca55f236afdad,PodSandboxId:16c3fc0f79af3adb62b2d434d8d84070e591aecd14f193430a9319b87b788a31,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737977199756743645,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f71e8580-dbab-4556-a5d9-8525eb0f75d4,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:885b33145cbcaf2e6250725f7369371bf698150a810c0be1a74d3e1cb7868d6f,PodSandboxId:6b94b00da6b47e974dbc87567687e9151270034909789ae20eb925ea58e1aff2,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737977185007427844,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-l2gvn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e8dabf7a-bcf2-46c6-be52-e3231977ff7e,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5b27a30a1dde249bfa0eb2ae51082778ed723b3229106fe3e1807299430c3220,PodSandboxId:d83fd9a245177c294cbccb959cf3610a9699991d84fb02c5b0286efe18ecb25f,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737977172526636429,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-s4t59,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0db19b5e-e643-4d86-aacd-961848f506aa,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8020bd72f7328897eee7e92021fb6ce50dd884196388d9c506acf0392fe4018b,PodSandboxId:f0ffcf59508c0da7d64ba39f246ed579efd23c900f32f892b323a7f29ed5a46e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737977172382539726,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-prjgq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a04cab60-da29-4c86-9466-a9180b7aacbc,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c48403d3a143b17397a22cff52d017bba9ea2789a7e297353bed3e9f21925e5d,PodSandboxId:f7a7f02a355f3120ca33c88572f186b77c3a82e24a1ee4452d8a52961d155e8c,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737977122261658366,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-lt7hj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60442dee-924f-45d7-b51b-92bb1a51d828,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93f19a2899c94131e920a7e2b3e404454abd94465c7468b6d1ba567ae10a1cd6,PodSandboxId:1c91a6cbfe54ea4ae5ca1bbec05f165098442d416ba803b0f0c2f2cb447abdc5,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de3
5f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737977119632717337,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fef84bd-ed4e-4a26-9793-a9e515f5c005,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cf3ecc1d083344af148765f624cf449fef0a9324f374bcd382187e3e24c1fa2,PodSandboxId:3a10c64a2b0d3d65a865ad8f3f2e7455865f6507995ead3d01531e410e5fdadc,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737977110346021396,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b027264b-8471-4994-9afc-0a96016c98f3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c553684d4c8a901b4360b5ce44232d78b93bd677ae77391cdc2cd4250015829,PodSandboxId:41f59d0173d8b902d818778affffb5894ce1fb7d93730f7bbd1c7856fa94ec3b,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737977108098641735,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-dvzt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d8159fb-e4e1-401a-a9fb-6d42bc4d838a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:c24814db933f51e041ee883139ff306c3c0ea942719b79e9b84ea8d4f8a541e8,PodSandboxId:9a11435a0bb621a0d749fcd99afb21f477dfe27a2dfac946a9d328bb654e5eff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737977105219828700,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-657tw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54a2b574-ac90-4b6a-b1cc-2ce30a926b4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdcfbc5d1fbd82071bc3fe6a2a6d
e1de837577baf8c11b1db2d14115c136c1d6,PodSandboxId:94b4287e03a0cbbe0c403645ad8b93dc18577226dde20a74811a9e38b5bb9bba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737977094542334854,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-010792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec2016313039499031af07c71cd8e4f9,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4826fc0746d2da4a62ba03c0bf71aee37da2091b243bbdb9e2284f382b0aa892,PodSandbox
Id:912644c0f1fd553dea0c66aa5d3caf2e7fa084ed1820f17484b19d64cd884c87,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737977094529903601,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-010792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 054b3bdf47f4f6c7f164c19ea58cc8ba,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394bd5ae1c014f9aa6785633024e32b0b257a9c9b4b7cbf5de6880ff47bcbb48,PodSandboxId:cfbf7b44d2cd6f
364ec72b2bfa2af567653e3bb1d06abdee14d9f12152ea44f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737977094527505126,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-010792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a525b9574d5b5378001d4ad1e2fbd6e6,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ea697624dd369fbc1dca0e39fa8b0744bbf8661a04c3ee6294bb97320c9fbff,PodSandboxId:4ea9b741f4fb961f7b0096a2a42da03
bae0cf9496158e50a9efdc4b6b9f94cca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737977094382722503,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-010792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86bccba22185355a46fed2028601d22e,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a3f9d81a-da58-4471-83b5-729df0495680 name=/runtime.v1.RuntimeServ
ice/ListContainers
	Jan 27 11:29:52 addons-010792 crio[663]: time="2025-01-27 11:29:52.408017959Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f24f503b-a4fe-40a9-bba4-5315ebe3a351 name=/runtime.v1.RuntimeService/Version
	Jan 27 11:29:52 addons-010792 crio[663]: time="2025-01-27 11:29:52.408241143Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f24f503b-a4fe-40a9-bba4-5315ebe3a351 name=/runtime.v1.RuntimeService/Version
	Jan 27 11:29:52 addons-010792 crio[663]: time="2025-01-27 11:29:52.409510788Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fe578829-2829-49d8-ba44-c24317dea600 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 11:29:52 addons-010792 crio[663]: time="2025-01-27 11:29:52.410748878Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737977392410723745,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fe578829-2829-49d8-ba44-c24317dea600 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 11:29:52 addons-010792 crio[663]: time="2025-01-27 11:29:52.411402707Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1b50501e-cc34-49bf-bb04-4d512211253f name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 11:29:52 addons-010792 crio[663]: time="2025-01-27 11:29:52.411456292Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1b50501e-cc34-49bf-bb04-4d512211253f name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 11:29:52 addons-010792 crio[663]: time="2025-01-27 11:29:52.411746505Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:59f0c5ee42cda5ab747be14d1cd0e7b9588c1a795f10253f837414cdf1de0d0e,PodSandboxId:110ed8f4dad9d408304092be15e29d02bd7d31e2d09fa46b59490ae059c7d810,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1737977251988350879,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bdca297a-a0ac-4017-9eda-1326d1b0a09d,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce79a4b02d35cf290043aa80202f64b5ead4515482764ece6cca55f236afdad,PodSandboxId:16c3fc0f79af3adb62b2d434d8d84070e591aecd14f193430a9319b87b788a31,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737977199756743645,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f71e8580-dbab-4556-a5d9-8525eb0f75d4,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:885b33145cbcaf2e6250725f7369371bf698150a810c0be1a74d3e1cb7868d6f,PodSandboxId:6b94b00da6b47e974dbc87567687e9151270034909789ae20eb925ea58e1aff2,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737977185007427844,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-l2gvn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e8dabf7a-bcf2-46c6-be52-e3231977ff7e,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5b27a30a1dde249bfa0eb2ae51082778ed723b3229106fe3e1807299430c3220,PodSandboxId:d83fd9a245177c294cbccb959cf3610a9699991d84fb02c5b0286efe18ecb25f,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737977172526636429,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-s4t59,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0db19b5e-e643-4d86-aacd-961848f506aa,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8020bd72f7328897eee7e92021fb6ce50dd884196388d9c506acf0392fe4018b,PodSandboxId:f0ffcf59508c0da7d64ba39f246ed579efd23c900f32f892b323a7f29ed5a46e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737977172382539726,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-prjgq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a04cab60-da29-4c86-9466-a9180b7aacbc,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c48403d3a143b17397a22cff52d017bba9ea2789a7e297353bed3e9f21925e5d,PodSandboxId:f7a7f02a355f3120ca33c88572f186b77c3a82e24a1ee4452d8a52961d155e8c,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737977122261658366,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-lt7hj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60442dee-924f-45d7-b51b-92bb1a51d828,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93f19a2899c94131e920a7e2b3e404454abd94465c7468b6d1ba567ae10a1cd6,PodSandboxId:1c91a6cbfe54ea4ae5ca1bbec05f165098442d416ba803b0f0c2f2cb447abdc5,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de3
5f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737977119632717337,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fef84bd-ed4e-4a26-9793-a9e515f5c005,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cf3ecc1d083344af148765f624cf449fef0a9324f374bcd382187e3e24c1fa2,PodSandboxId:3a10c64a2b0d3d65a865ad8f3f2e7455865f6507995ead3d01531e410e5fdadc,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737977110346021396,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b027264b-8471-4994-9afc-0a96016c98f3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c553684d4c8a901b4360b5ce44232d78b93bd677ae77391cdc2cd4250015829,PodSandboxId:41f59d0173d8b902d818778affffb5894ce1fb7d93730f7bbd1c7856fa94ec3b,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737977108098641735,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-dvzt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d8159fb-e4e1-401a-a9fb-6d42bc4d838a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:c24814db933f51e041ee883139ff306c3c0ea942719b79e9b84ea8d4f8a541e8,PodSandboxId:9a11435a0bb621a0d749fcd99afb21f477dfe27a2dfac946a9d328bb654e5eff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737977105219828700,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-657tw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54a2b574-ac90-4b6a-b1cc-2ce30a926b4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdcfbc5d1fbd82071bc3fe6a2a6d
e1de837577baf8c11b1db2d14115c136c1d6,PodSandboxId:94b4287e03a0cbbe0c403645ad8b93dc18577226dde20a74811a9e38b5bb9bba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737977094542334854,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-010792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec2016313039499031af07c71cd8e4f9,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4826fc0746d2da4a62ba03c0bf71aee37da2091b243bbdb9e2284f382b0aa892,PodSandbox
Id:912644c0f1fd553dea0c66aa5d3caf2e7fa084ed1820f17484b19d64cd884c87,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737977094529903601,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-010792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 054b3bdf47f4f6c7f164c19ea58cc8ba,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394bd5ae1c014f9aa6785633024e32b0b257a9c9b4b7cbf5de6880ff47bcbb48,PodSandboxId:cfbf7b44d2cd6f
364ec72b2bfa2af567653e3bb1d06abdee14d9f12152ea44f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737977094527505126,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-010792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a525b9574d5b5378001d4ad1e2fbd6e6,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ea697624dd369fbc1dca0e39fa8b0744bbf8661a04c3ee6294bb97320c9fbff,PodSandboxId:4ea9b741f4fb961f7b0096a2a42da03
bae0cf9496158e50a9efdc4b6b9f94cca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737977094382722503,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-010792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86bccba22185355a46fed2028601d22e,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1b50501e-cc34-49bf-bb04-4d512211253f name=/runtime.v1.RuntimeServ
ice/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	59f0c5ee42cda       docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901                              2 minutes ago       Running             nginx                     0                   110ed8f4dad9d       nginx
	dce79a4b02d35       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   16c3fc0f79af3       busybox
	885b33145cbca       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   6b94b00da6b47       ingress-nginx-controller-56d7c84fd4-l2gvn
	5b27a30a1dde2       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              patch                     0                   d83fd9a245177       ingress-nginx-admission-patch-s4t59
	8020bd72f7328       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              create                    0                   f0ffcf59508c0       ingress-nginx-admission-create-prjgq
	c48403d3a143b       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   f7a7f02a355f3       amd-gpu-device-plugin-lt7hj
	93f19a2899c94       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             4 minutes ago       Running             minikube-ingress-dns      0                   1c91a6cbfe54e       kube-ingress-dns-minikube
	0cf3ecc1d0833       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   3a10c64a2b0d3       storage-provisioner
	5c553684d4c8a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             4 minutes ago       Running             coredns                   0                   41f59d0173d8b       coredns-668d6bf9bc-dvzt2
	c24814db933f5       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                                             4 minutes ago       Running             kube-proxy                0                   9a11435a0bb62       kube-proxy-657tw
	cdcfbc5d1fbd8       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                             4 minutes ago       Running             etcd                      0                   94b4287e03a0c       etcd-addons-010792
	4826fc0746d2d       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                                             4 minutes ago       Running             kube-scheduler            0                   912644c0f1fd5       kube-scheduler-addons-010792
	394bd5ae1c014       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                                             4 minutes ago       Running             kube-apiserver            0                   cfbf7b44d2cd6       kube-apiserver-addons-010792
	1ea697624dd36       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                                             4 minutes ago       Running             kube-controller-manager   0                   4ea9b741f4fb9       kube-controller-manager-addons-010792
	
	
	==> coredns [5c553684d4c8a901b4360b5ce44232d78b93bd677ae77391cdc2cd4250015829] <==
	[INFO] 10.244.0.7:50305 - 51404 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000270502s
	[INFO] 10.244.0.7:50305 - 29704 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.001852465s
	[INFO] 10.244.0.7:50305 - 58623 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000156411s
	[INFO] 10.244.0.7:50305 - 4942 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000125351s
	[INFO] 10.244.0.7:50305 - 58009 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000114038s
	[INFO] 10.244.0.7:50305 - 45742 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000109282s
	[INFO] 10.244.0.7:50305 - 32704 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000116768s
	[INFO] 10.244.0.7:40829 - 55674 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000096495s
	[INFO] 10.244.0.7:40829 - 56056 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000132355s
	[INFO] 10.244.0.7:60854 - 9810 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000194988s
	[INFO] 10.244.0.7:60854 - 10057 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000103206s
	[INFO] 10.244.0.7:32802 - 11393 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000112287s
	[INFO] 10.244.0.7:32802 - 11649 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000385243s
	[INFO] 10.244.0.7:49389 - 63653 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000093861s
	[INFO] 10.244.0.7:49389 - 63445 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000107978s
	[INFO] 10.244.0.23:36809 - 18783 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000428491s
	[INFO] 10.244.0.23:37121 - 49057 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000126206s
	[INFO] 10.244.0.23:35646 - 6744 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000103632s
	[INFO] 10.244.0.23:51412 - 24878 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000088565s
	[INFO] 10.244.0.23:45071 - 64177 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00008166s
	[INFO] 10.244.0.23:60175 - 6347 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000063311s
	[INFO] 10.244.0.23:51371 - 56549 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001383056s
	[INFO] 10.244.0.23:59445 - 56939 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001806709s
	[INFO] 10.244.0.28:36917 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000261662s
	[INFO] 10.244.0.28:52306 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000165939s
	
	
	==> describe nodes <==
	Name:               addons-010792
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-010792
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650
	                    minikube.k8s.io/name=addons-010792
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T11_24_59_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-010792
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 11:24:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-010792
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 11:29:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 11:28:03 +0000   Mon, 27 Jan 2025 11:24:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 11:28:03 +0000   Mon, 27 Jan 2025 11:24:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 11:28:03 +0000   Mon, 27 Jan 2025 11:24:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 11:28:03 +0000   Mon, 27 Jan 2025 11:24:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.45
	  Hostname:    addons-010792
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 6ec9f65910b341c39c97bc60faa4c260
	  System UUID:                6ec9f659-10b3-41c3-9c97-bc60faa4c260
	  Boot ID:                    a62ecf9e-e76c-400a-8439-20628f6f00d4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m16s
	  default                     hello-world-app-7d9564db4-8x7vl              0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-l2gvn    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m41s
	  kube-system                 amd-gpu-device-plugin-lt7hj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 coredns-668d6bf9bc-dvzt2                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m48s
	  kube-system                 etcd-addons-010792                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m55s
	  kube-system                 kube-apiserver-addons-010792                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-controller-manager-addons-010792        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 kube-proxy-657tw                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 kube-scheduler-addons-010792                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m46s  kube-proxy       
	  Normal  Starting                 4m53s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m53s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m53s  kubelet          Node addons-010792 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m53s  kubelet          Node addons-010792 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m53s  kubelet          Node addons-010792 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m53s  kubelet          Node addons-010792 status is now: NodeReady
	  Normal  RegisteredNode           4m49s  node-controller  Node addons-010792 event: Registered Node addons-010792 in Controller
	
	
	==> dmesg <==
	[  +0.077328] kauditd_printk_skb: 69 callbacks suppressed
	[Jan27 11:25] systemd-fstab-generator[1349]: Ignoring "noauto" option for root device
	[  +0.173564] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.012983] kauditd_printk_skb: 127 callbacks suppressed
	[  +5.114533] kauditd_printk_skb: 166 callbacks suppressed
	[  +5.253832] kauditd_printk_skb: 47 callbacks suppressed
	[ +31.383879] kauditd_printk_skb: 4 callbacks suppressed
	[Jan27 11:26] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.976172] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.039426] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.539952] kauditd_printk_skb: 63 callbacks suppressed
	[  +5.105244] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.261609] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.294667] kauditd_printk_skb: 7 callbacks suppressed
	[ +14.108005] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.840167] kauditd_printk_skb: 2 callbacks suppressed
	[Jan27 11:27] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.028359] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.005838] kauditd_printk_skb: 44 callbacks suppressed
	[  +7.736090] kauditd_printk_skb: 37 callbacks suppressed
	[  +7.949786] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.270931] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.036800] kauditd_printk_skb: 24 callbacks suppressed
	[  +7.669661] kauditd_printk_skb: 3 callbacks suppressed
	[Jan27 11:29] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [cdcfbc5d1fbd82071bc3fe6a2a6de1de837577baf8c11b1db2d14115c136c1d6] <==
	{"level":"warn","ts":"2025-01-27T11:26:32.309882Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T11:26:31.926772Z","time spent":"383.019782ms","remote":"127.0.0.1:53366","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-010792\" mod_revision:1057 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-010792\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-010792\" > >"}
	{"level":"warn","ts":"2025-01-27T11:26:32.310016Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"228.79304ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T11:26:32.310036Z","caller":"traceutil/trace.go:171","msg":"trace[1144463095] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1120; }","duration":"228.816311ms","start":"2025-01-27T11:26:32.081214Z","end":"2025-01-27T11:26:32.310030Z","steps":["trace[1144463095] 'agreement among raft nodes before linearized reading'  (duration: 228.776595ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T11:26:32.310248Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"209.028111ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-01-27T11:26:32.310521Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.910494ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" limit:1 ","response":"range_response_count:1 size:552"}
	{"level":"info","ts":"2025-01-27T11:26:32.310544Z","caller":"traceutil/trace.go:171","msg":"trace[1651777374] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1120; }","duration":"109.953461ms","start":"2025-01-27T11:26:32.200583Z","end":"2025-01-27T11:26:32.310536Z","steps":["trace[1651777374] 'agreement among raft nodes before linearized reading'  (duration: 109.875792ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T11:26:32.310721Z","caller":"traceutil/trace.go:171","msg":"trace[774291885] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1120; }","duration":"209.089669ms","start":"2025-01-27T11:26:32.101173Z","end":"2025-01-27T11:26:32.310263Z","steps":["trace[774291885] 'agreement among raft nodes before linearized reading'  (duration: 209.033946ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T11:27:02.211975Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.549951ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T11:27:02.212129Z","caller":"traceutil/trace.go:171","msg":"trace[464066508] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1302; }","duration":"130.716186ms","start":"2025-01-27T11:27:02.081399Z","end":"2025-01-27T11:27:02.212116Z","steps":["trace[464066508] 'range keys from in-memory index tree'  (duration: 130.484253ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T11:27:25.021691Z","caller":"traceutil/trace.go:171","msg":"trace[2096804764] transaction","detail":"{read_only:false; response_revision:1546; number_of_response:1; }","duration":"376.701568ms","start":"2025-01-27T11:27:24.644963Z","end":"2025-01-27T11:27:25.021664Z","steps":["trace[2096804764] 'process raft request'  (duration: 376.409268ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T11:27:25.021832Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T11:27:24.644943Z","time spent":"376.801024ms","remote":"127.0.0.1:53246","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1529 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-01-27T11:27:26.866021Z","caller":"traceutil/trace.go:171","msg":"trace[402059057] transaction","detail":"{read_only:false; response_revision:1548; number_of_response:1; }","duration":"322.288078ms","start":"2025-01-27T11:27:26.543719Z","end":"2025-01-27T11:27:26.866007Z","steps":["trace[402059057] 'process raft request'  (duration: 322.146523ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T11:27:26.866167Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T11:27:26.543707Z","time spent":"322.402694ms","remote":"127.0.0.1:53396","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":591,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/ingress/default/nginx-ingress\" mod_revision:1512 > success:<request_put:<key:\"/registry/ingress/default/nginx-ingress\" value_size:544 >> failure:<request_range:<key:\"/registry/ingress/default/nginx-ingress\" > >"}
	{"level":"info","ts":"2025-01-27T11:27:26.866195Z","caller":"traceutil/trace.go:171","msg":"trace[1905870449] linearizableReadLoop","detail":"{readStateIndex:1601; appliedIndex:1601; }","duration":"314.406847ms","start":"2025-01-27T11:27:26.551774Z","end":"2025-01-27T11:27:26.866180Z","steps":["trace[1905870449] 'read index received'  (duration: 314.400441ms)","trace[1905870449] 'applied index is now lower than readState.Index'  (duration: 5.293µs)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T11:27:26.866358Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"314.522892ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T11:27:26.866379Z","caller":"traceutil/trace.go:171","msg":"trace[1106073508] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1548; }","duration":"314.621491ms","start":"2025-01-27T11:27:26.551752Z","end":"2025-01-27T11:27:26.866373Z","steps":["trace[1106073508] 'agreement among raft nodes before linearized reading'  (duration: 314.478118ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T11:27:26.866396Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T11:27:26.551740Z","time spent":"314.652724ms","remote":"127.0.0.1:53064","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2025-01-27T11:27:26.866845Z","caller":"traceutil/trace.go:171","msg":"trace[667900930] transaction","detail":"{read_only:false; response_revision:1549; number_of_response:1; }","duration":"278.679518ms","start":"2025-01-27T11:27:26.588156Z","end":"2025-01-27T11:27:26.866835Z","steps":["trace[667900930] 'process raft request'  (duration: 278.613624ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T11:27:52.704567Z","caller":"traceutil/trace.go:171","msg":"trace[1643561714] linearizableReadLoop","detail":"{readStateIndex:1842; appliedIndex:1841; }","duration":"228.906438ms","start":"2025-01-27T11:27:52.475635Z","end":"2025-01-27T11:27:52.704542Z","steps":["trace[1643561714] 'read index received'  (duration: 228.763762ms)","trace[1643561714] 'applied index is now lower than readState.Index'  (duration: 142.019µs)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T11:27:52.704580Z","caller":"traceutil/trace.go:171","msg":"trace[961704465] transaction","detail":"{read_only:false; response_revision:1781; number_of_response:1; }","duration":"306.332572ms","start":"2025-01-27T11:27:52.398231Z","end":"2025-01-27T11:27:52.704564Z","steps":["trace[961704465] 'process raft request'  (duration: 306.221208ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T11:27:52.704972Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T11:27:52.398215Z","time spent":"306.70773ms","remote":"127.0.0.1:53152","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":746,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/default/busybox.181e891efce236f5\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/busybox.181e891efce236f5\" value_size:679 lease:1859587143310164660 >> failure:<>"}
	{"level":"warn","ts":"2025-01-27T11:27:52.705186Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"156.874938ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T11:27:52.705222Z","caller":"traceutil/trace.go:171","msg":"trace[192505519] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1781; }","duration":"156.933829ms","start":"2025-01-27T11:27:52.548282Z","end":"2025-01-27T11:27:52.705216Z","steps":["trace[192505519] 'agreement among raft nodes before linearized reading'  (duration: 156.883013ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T11:27:52.704850Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"229.166264ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/csi-resizer\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T11:27:52.705327Z","caller":"traceutil/trace.go:171","msg":"trace[1021080160] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/csi-resizer; range_end:; response_count:0; response_revision:1781; }","duration":"229.722729ms","start":"2025-01-27T11:27:52.475595Z","end":"2025-01-27T11:27:52.705318Z","steps":["trace[1021080160] 'agreement among raft nodes before linearized reading'  (duration: 229.139989ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:29:52 up 5 min,  0 users,  load average: 0.41, 0.84, 0.46
	Linux addons-010792 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [394bd5ae1c014f9aa6785633024e32b0b257a9c9b4b7cbf5de6880ff47bcbb48] <==
	I0127 11:26:09.434043       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0127 11:26:47.433810       1 conn.go:339] Error on socket receive: read tcp 192.168.39.45:8443->192.168.39.1:55648: use of closed network connection
	E0127 11:26:47.613289       1 conn.go:339] Error on socket receive: read tcp 192.168.39.45:8443->192.168.39.1:55682: use of closed network connection
	I0127 11:26:56.976105       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.168.44"}
	I0127 11:27:19.964961       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0127 11:27:20.150241       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.153.22"}
	I0127 11:27:23.401270       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0127 11:27:24.535833       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0127 11:27:25.749998       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0127 11:27:34.074035       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0127 11:27:48.515436       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0127 11:27:48.515501       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0127 11:27:48.551042       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0127 11:27:48.551231       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0127 11:27:48.619565       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0127 11:27:48.620242       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0127 11:27:48.666729       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0127 11:27:48.666779       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0127 11:27:48.774433       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0127 11:27:48.775214       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0127 11:27:49.666868       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0127 11:27:49.774293       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0127 11:27:49.785700       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0127 11:28:10.351093       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0127 11:29:51.308873       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.147.199"}
	
	
	==> kube-controller-manager [1ea697624dd369fbc1dca0e39fa8b0744bbf8661a04c3ee6294bb97320c9fbff] <==
	E0127 11:29:00.137641       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0127 11:29:13.437879       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 11:29:13.438926       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0127 11:29:13.439758       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 11:29:13.439944       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0127 11:29:15.649567       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 11:29:15.650523       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0127 11:29:15.651378       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 11:29:15.651443       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0127 11:29:30.812132       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 11:29:30.813165       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0127 11:29:30.813838       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 11:29:30.813904       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0127 11:29:46.021564       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 11:29:46.022811       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0127 11:29:46.023749       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 11:29:46.023857       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0127 11:29:46.770859       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 11:29:46.771989       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0127 11:29:46.772892       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 11:29:46.772971       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0127 11:29:51.117254       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="32.366784ms"
	I0127 11:29:51.145256       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="27.947617ms"
	I0127 11:29:51.161459       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="16.123546ms"
	I0127 11:29:51.161640       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="86.879µs"
	
	
	==> kube-proxy [c24814db933f51e041ee883139ff306c3c0ea942719b79e9b84ea8d4f8a541e8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 11:25:06.093279       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 11:25:06.102604       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.45"]
	E0127 11:25:06.102670       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 11:25:06.177028       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 11:25:06.177159       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 11:25:06.177183       1 server_linux.go:170] "Using iptables Proxier"
	I0127 11:25:06.188205       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 11:25:06.188499       1 server.go:497] "Version info" version="v1.32.1"
	I0127 11:25:06.188511       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 11:25:06.193196       1 config.go:199] "Starting service config controller"
	I0127 11:25:06.193235       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 11:25:06.193260       1 config.go:105] "Starting endpoint slice config controller"
	I0127 11:25:06.193264       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 11:25:06.193393       1 config.go:329] "Starting node config controller"
	I0127 11:25:06.193446       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 11:25:06.293970       1 shared_informer.go:320] Caches are synced for node config
	I0127 11:25:06.294023       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 11:25:06.294035       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [4826fc0746d2da4a62ba03c0bf71aee37da2091b243bbdb9e2284f382b0aa892] <==
	W0127 11:24:56.809256       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0127 11:24:56.812088       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 11:24:57.646200       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 11:24:57.646245       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0127 11:24:57.727376       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0127 11:24:57.727421       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 11:24:57.752876       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0127 11:24:57.752958       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:24:57.797916       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0127 11:24:57.798134       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:24:57.864915       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0127 11:24:57.865174       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:24:57.913229       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0127 11:24:57.913275       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0127 11:24:57.932378       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 11:24:57.932467       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:24:57.943871       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0127 11:24:57.943963       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 11:24:57.947130       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0127 11:24:57.947171       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:24:57.973833       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0127 11:24:57.974163       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 11:24:58.036704       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0127 11:24:58.036749       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 11:25:00.396529       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 11:29:09 addons-010792 kubelet[1222]: E0127 11:29:09.757707    1222 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737977349757347591,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 11:29:09 addons-010792 kubelet[1222]: E0127 11:29:09.758109    1222 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737977349757347591,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 11:29:18 addons-010792 kubelet[1222]: I0127 11:29:18.396140    1222 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jan 27 11:29:19 addons-010792 kubelet[1222]: E0127 11:29:19.761838    1222 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737977359761392804,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 11:29:19 addons-010792 kubelet[1222]: E0127 11:29:19.761936    1222 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737977359761392804,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 11:29:26 addons-010792 kubelet[1222]: I0127 11:29:26.395964    1222 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-lt7hj" secret="" err="secret \"gcp-auth\" not found"
	Jan 27 11:29:29 addons-010792 kubelet[1222]: E0127 11:29:29.764673    1222 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737977369764305802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 11:29:29 addons-010792 kubelet[1222]: E0127 11:29:29.764763    1222 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737977369764305802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 11:29:39 addons-010792 kubelet[1222]: E0127 11:29:39.767030    1222 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737977379766711951,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 11:29:39 addons-010792 kubelet[1222]: E0127 11:29:39.767110    1222 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737977379766711951,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 11:29:49 addons-010792 kubelet[1222]: E0127 11:29:49.769150    1222 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737977389768776707,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 11:29:49 addons-010792 kubelet[1222]: E0127 11:29:49.769202    1222 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737977389768776707,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 11:29:51 addons-010792 kubelet[1222]: I0127 11:29:51.109923    1222 memory_manager.go:355] "RemoveStaleState removing state" podUID="6e4e3ef7-024e-4192-8273-bb196902ecbc" containerName="csi-resizer"
	Jan 27 11:29:51 addons-010792 kubelet[1222]: I0127 11:29:51.110015    1222 memory_manager.go:355] "RemoveStaleState removing state" podUID="c9af4923-d14a-41e3-912e-f43e52a0ff79" containerName="node-driver-registrar"
	Jan 27 11:29:51 addons-010792 kubelet[1222]: I0127 11:29:51.110109    1222 memory_manager.go:355] "RemoveStaleState removing state" podUID="c9af4923-d14a-41e3-912e-f43e52a0ff79" containerName="liveness-probe"
	Jan 27 11:29:51 addons-010792 kubelet[1222]: I0127 11:29:51.110117    1222 memory_manager.go:355] "RemoveStaleState removing state" podUID="41ef935a-663e-4d22-b6ad-4771fe5569a6" containerName="local-path-provisioner"
	Jan 27 11:29:51 addons-010792 kubelet[1222]: I0127 11:29:51.110123    1222 memory_manager.go:355] "RemoveStaleState removing state" podUID="0c32a36a-26ad-4480-a31a-cde35caa99ae" containerName="volume-snapshot-controller"
	Jan 27 11:29:51 addons-010792 kubelet[1222]: I0127 11:29:51.110129    1222 memory_manager.go:355] "RemoveStaleState removing state" podUID="c9af4923-d14a-41e3-912e-f43e52a0ff79" containerName="csi-external-health-monitor-controller"
	Jan 27 11:29:51 addons-010792 kubelet[1222]: I0127 11:29:51.110135    1222 memory_manager.go:355] "RemoveStaleState removing state" podUID="c9af4923-d14a-41e3-912e-f43e52a0ff79" containerName="hostpath"
	Jan 27 11:29:51 addons-010792 kubelet[1222]: I0127 11:29:51.110141    1222 memory_manager.go:355] "RemoveStaleState removing state" podUID="27ec2a13-e7ed-4282-85ee-13264d989ae2" containerName="task-pv-container"
	Jan 27 11:29:51 addons-010792 kubelet[1222]: I0127 11:29:51.110147    1222 memory_manager.go:355] "RemoveStaleState removing state" podUID="c9af4923-d14a-41e3-912e-f43e52a0ff79" containerName="csi-snapshotter"
	Jan 27 11:29:51 addons-010792 kubelet[1222]: I0127 11:29:51.110153    1222 memory_manager.go:355] "RemoveStaleState removing state" podUID="c9af4923-d14a-41e3-912e-f43e52a0ff79" containerName="csi-provisioner"
	Jan 27 11:29:51 addons-010792 kubelet[1222]: I0127 11:29:51.110215    1222 memory_manager.go:355] "RemoveStaleState removing state" podUID="22425198-8f19-4758-9912-17482ac17e97" containerName="volume-snapshot-controller"
	Jan 27 11:29:51 addons-010792 kubelet[1222]: I0127 11:29:51.110222    1222 memory_manager.go:355] "RemoveStaleState removing state" podUID="8604659e-d317-472a-9e10-99b36b480577" containerName="csi-attacher"
	Jan 27 11:29:51 addons-010792 kubelet[1222]: I0127 11:29:51.214049    1222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x99nz\" (UniqueName: \"kubernetes.io/projected/b9898d6e-be34-4610-955a-d574bbb5c759-kube-api-access-x99nz\") pod \"hello-world-app-7d9564db4-8x7vl\" (UID: \"b9898d6e-be34-4610-955a-d574bbb5c759\") " pod="default/hello-world-app-7d9564db4-8x7vl"
	
	
	==> storage-provisioner [0cf3ecc1d083344af148765f624cf449fef0a9324f374bcd382187e3e24c1fa2] <==
	I0127 11:25:10.891324       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 11:25:10.959841       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 11:25:10.959908       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 11:25:10.993645       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 11:25:10.993868       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-010792_c3f77c7f-85a8-48f0-9ab9-af48f395ccd9!
	I0127 11:25:11.008148       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"055a8ba6-e03a-40cc-8502-91fe2ef0f1b6", APIVersion:"v1", ResourceVersion:"636", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-010792_c3f77c7f-85a8-48f0-9ab9-af48f395ccd9 became leader
	I0127 11:25:11.095145       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-010792_c3f77c7f-85a8-48f0-9ab9-af48f395ccd9!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-010792 -n addons-010792
helpers_test.go:261: (dbg) Run:  kubectl --context addons-010792 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-7d9564db4-8x7vl ingress-nginx-admission-create-prjgq ingress-nginx-admission-patch-s4t59
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-010792 describe pod hello-world-app-7d9564db4-8x7vl ingress-nginx-admission-create-prjgq ingress-nginx-admission-patch-s4t59
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-010792 describe pod hello-world-app-7d9564db4-8x7vl ingress-nginx-admission-create-prjgq ingress-nginx-admission-patch-s4t59: exit status 1 (67.152529ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-7d9564db4-8x7vl
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-010792/192.168.39.45
	Start Time:       Mon, 27 Jan 2025 11:29:51 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=7d9564db4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-7d9564db4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x99nz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-x99nz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-7d9564db4-8x7vl to addons-010792
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-prjgq" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-s4t59" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-010792 describe pod hello-world-app-7d9564db4-8x7vl ingress-nginx-admission-create-prjgq ingress-nginx-admission-patch-s4t59: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-010792 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-010792 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-010792 addons disable ingress --alsologtostderr -v=1: (7.697657081s)
--- FAIL: TestAddons/parallel/Ingress (162.31s)
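The post-mortem above first filters for pods whose phase is not Running (helpers_test.go:261) and then describes them, which is why the describe step exits non-zero once the two admission pods have already been cleaned up. A minimal standalone sketch of that same check, run outside the harness: it assumes kubectl is on PATH and that the addons-010792 context still exists; the package layout and names are illustrative only, not the test-suite code.

// Sketch: replay the "list non-running pods" post-mortem query by hand.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Same kubectl invocation as logged by the post-mortem helper above.
	out, err := exec.Command(
		"kubectl", "--context", "addons-010792",
		"get", "po",
		"-o=jsonpath={.items[*].metadata.name}",
		"-A",
		"--field-selector=status.phase!=Running",
	).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl failed: %v\n%s", err, out)
	}
	fmt.Printf("non-running pods: %s\n", out)
}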

                                                
                                    
TestPreload (170.46s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-126856 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-126856 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m32.633074539s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-126856 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-126856 image pull gcr.io/k8s-minikube/busybox: (3.486806976s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-126856
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-126856: (6.617881262s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-126856 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-126856 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m4.607399818s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-126856 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:629: *** TestPreload FAILED at 2025-01-27 12:19:43.086711332 +0000 UTC m=+3357.867484873
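The Run lines above show the exact sequence TestPreload drives: create a cluster with --preload=false, pull gcr.io/k8s-minikube/busybox, stop, start the same profile again, then check that busybox still appears in the image list. A minimal sketch that replays that sequence by hand, assuming the same out/minikube-linux-amd64 binary and kvm2 driver are available; the run helper and the reuse of the profile name are illustrative, not harness code.

// Sketch: replay the TestPreload command sequence and re-check the image list.
package main

import (
	"log"
	"os/exec"
	"strings"
)

// run executes a space-separated command line and aborts on failure.
func run(cmdline string) string {
	fields := strings.Fields(cmdline)
	out, err := exec.Command(fields[0], fields[1:]...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s: %v\n%s", cmdline, err, out)
	}
	return string(out)
}

func main() {
	const mk = "out/minikube-linux-amd64"
	run(mk + " start -p test-preload-126856 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.24.4")
	run(mk + " -p test-preload-126856 image pull gcr.io/k8s-minikube/busybox")
	run(mk + " stop -p test-preload-126856")
	run(mk + " start -p test-preload-126856 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 --container-runtime=crio")
	images := run(mk + " -p test-preload-126856 image list")
	if !strings.Contains(images, "gcr.io/k8s-minikube/busybox") {
		log.Fatalf("busybox missing from image list:\n%s", images)
	}
}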
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-126856 -n test-preload-126856
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-126856 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-126856 logs -n 25: (1.095141789s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-589982 ssh -n                                                                 | multinode-589982     | jenkins | v1.35.0 | 27 Jan 25 12:05 UTC | 27 Jan 25 12:05 UTC |
	|         | multinode-589982-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-589982 ssh -n multinode-589982 sudo cat                                       | multinode-589982     | jenkins | v1.35.0 | 27 Jan 25 12:05 UTC | 27 Jan 25 12:05 UTC |
	|         | /home/docker/cp-test_multinode-589982-m03_multinode-589982.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-589982 cp multinode-589982-m03:/home/docker/cp-test.txt                       | multinode-589982     | jenkins | v1.35.0 | 27 Jan 25 12:05 UTC | 27 Jan 25 12:05 UTC |
	|         | multinode-589982-m02:/home/docker/cp-test_multinode-589982-m03_multinode-589982-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-589982 ssh -n                                                                 | multinode-589982     | jenkins | v1.35.0 | 27 Jan 25 12:05 UTC | 27 Jan 25 12:05 UTC |
	|         | multinode-589982-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-589982 ssh -n multinode-589982-m02 sudo cat                                   | multinode-589982     | jenkins | v1.35.0 | 27 Jan 25 12:05 UTC | 27 Jan 25 12:05 UTC |
	|         | /home/docker/cp-test_multinode-589982-m03_multinode-589982-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-589982 node stop m03                                                          | multinode-589982     | jenkins | v1.35.0 | 27 Jan 25 12:05 UTC | 27 Jan 25 12:05 UTC |
	| node    | multinode-589982 node start                                                             | multinode-589982     | jenkins | v1.35.0 | 27 Jan 25 12:05 UTC | 27 Jan 25 12:05 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-589982                                                                | multinode-589982     | jenkins | v1.35.0 | 27 Jan 25 12:05 UTC |                     |
	| stop    | -p multinode-589982                                                                     | multinode-589982     | jenkins | v1.35.0 | 27 Jan 25 12:05 UTC | 27 Jan 25 12:08 UTC |
	| start   | -p multinode-589982                                                                     | multinode-589982     | jenkins | v1.35.0 | 27 Jan 25 12:08 UTC | 27 Jan 25 12:11 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-589982                                                                | multinode-589982     | jenkins | v1.35.0 | 27 Jan 25 12:11 UTC |                     |
	| node    | multinode-589982 node delete                                                            | multinode-589982     | jenkins | v1.35.0 | 27 Jan 25 12:11 UTC | 27 Jan 25 12:11 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-589982 stop                                                                   | multinode-589982     | jenkins | v1.35.0 | 27 Jan 25 12:11 UTC | 27 Jan 25 12:14 UTC |
	| start   | -p multinode-589982                                                                     | multinode-589982     | jenkins | v1.35.0 | 27 Jan 25 12:14 UTC | 27 Jan 25 12:16 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-589982                                                                | multinode-589982     | jenkins | v1.35.0 | 27 Jan 25 12:16 UTC |                     |
	| start   | -p multinode-589982-m02                                                                 | multinode-589982-m02 | jenkins | v1.35.0 | 27 Jan 25 12:16 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-589982-m03                                                                 | multinode-589982-m03 | jenkins | v1.35.0 | 27 Jan 25 12:16 UTC | 27 Jan 25 12:16 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-589982                                                                 | multinode-589982     | jenkins | v1.35.0 | 27 Jan 25 12:16 UTC |                     |
	| delete  | -p multinode-589982-m03                                                                 | multinode-589982-m03 | jenkins | v1.35.0 | 27 Jan 25 12:16 UTC | 27 Jan 25 12:16 UTC |
	| delete  | -p multinode-589982                                                                     | multinode-589982     | jenkins | v1.35.0 | 27 Jan 25 12:16 UTC | 27 Jan 25 12:16 UTC |
	| start   | -p test-preload-126856                                                                  | test-preload-126856  | jenkins | v1.35.0 | 27 Jan 25 12:16 UTC | 27 Jan 25 12:18 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-126856 image pull                                                          | test-preload-126856  | jenkins | v1.35.0 | 27 Jan 25 12:18 UTC | 27 Jan 25 12:18 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-126856                                                                  | test-preload-126856  | jenkins | v1.35.0 | 27 Jan 25 12:18 UTC | 27 Jan 25 12:18 UTC |
	| start   | -p test-preload-126856                                                                  | test-preload-126856  | jenkins | v1.35.0 | 27 Jan 25 12:18 UTC | 27 Jan 25 12:19 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-126856 image list                                                          | test-preload-126856  | jenkins | v1.35.0 | 27 Jan 25 12:19 UTC | 27 Jan 25 12:19 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 12:18:38
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 12:18:38.308403 1762328 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:18:38.308542 1762328 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:18:38.308553 1762328 out.go:358] Setting ErrFile to fd 2...
	I0127 12:18:38.308558 1762328 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:18:38.308742 1762328 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-1724227/.minikube/bin
	I0127 12:18:38.309318 1762328 out.go:352] Setting JSON to false
	I0127 12:18:38.310283 1762328 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":32459,"bootTime":1737947859,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 12:18:38.310391 1762328 start.go:139] virtualization: kvm guest
	I0127 12:18:38.312389 1762328 out.go:177] * [test-preload-126856] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 12:18:38.313544 1762328 out.go:177]   - MINIKUBE_LOCATION=20318
	I0127 12:18:38.313547 1762328 notify.go:220] Checking for updates...
	I0127 12:18:38.314574 1762328 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:18:38.315675 1762328 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20318-1724227/kubeconfig
	I0127 12:18:38.316811 1762328 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-1724227/.minikube
	I0127 12:18:38.317823 1762328 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 12:18:38.318885 1762328 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:18:38.320206 1762328 config.go:182] Loaded profile config "test-preload-126856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0127 12:18:38.320601 1762328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:18:38.320650 1762328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:18:38.335761 1762328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35765
	I0127 12:18:38.336185 1762328 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:18:38.336731 1762328 main.go:141] libmachine: Using API Version  1
	I0127 12:18:38.336795 1762328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:18:38.337108 1762328 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:18:38.337308 1762328 main.go:141] libmachine: (test-preload-126856) Calling .DriverName
	I0127 12:18:38.338779 1762328 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0127 12:18:38.339802 1762328 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:18:38.340089 1762328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:18:38.340140 1762328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:18:38.354652 1762328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46703
	I0127 12:18:38.355145 1762328 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:18:38.355641 1762328 main.go:141] libmachine: Using API Version  1
	I0127 12:18:38.355673 1762328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:18:38.355982 1762328 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:18:38.356157 1762328 main.go:141] libmachine: (test-preload-126856) Calling .DriverName
	I0127 12:18:38.390093 1762328 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 12:18:38.391174 1762328 start.go:297] selected driver: kvm2
	I0127 12:18:38.391186 1762328 start.go:901] validating driver "kvm2" against &{Name:test-preload-126856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-126856 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.180 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:18:38.391282 1762328 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:18:38.391980 1762328 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:18:38.392057 1762328 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20318-1724227/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 12:18:38.407376 1762328 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 12:18:38.407717 1762328 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 12:18:38.407747 1762328 cni.go:84] Creating CNI manager for ""
	I0127 12:18:38.407792 1762328 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 12:18:38.407855 1762328 start.go:340] cluster config:
	{Name:test-preload-126856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-126856 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.180 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:18:38.407952 1762328 iso.go:125] acquiring lock: {Name:mkadfcca31fda677c8c62cad1d325dd2bd0a2473 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:18:38.409454 1762328 out.go:177] * Starting "test-preload-126856" primary control-plane node in "test-preload-126856" cluster
	I0127 12:18:38.410521 1762328 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0127 12:18:38.882583 1762328 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0127 12:18:38.882624 1762328 cache.go:56] Caching tarball of preloaded images
	I0127 12:18:38.882850 1762328 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0127 12:18:38.884655 1762328 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0127 12:18:38.885705 1762328 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0127 12:18:38.981650 1762328 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0127 12:18:49.592875 1762328 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0127 12:18:49.592989 1762328 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0127 12:18:50.455724 1762328 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0127 12:18:50.455881 1762328 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/test-preload-126856/config.json ...
	I0127 12:18:50.456146 1762328 start.go:360] acquireMachinesLock for test-preload-126856: {Name:mk206ddd5e564ea6986d65bef76be5837a8b5360 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 12:18:50.456240 1762328 start.go:364] duration metric: took 68.473µs to acquireMachinesLock for "test-preload-126856"
	I0127 12:18:50.456263 1762328 start.go:96] Skipping create...Using existing machine configuration
	I0127 12:18:50.456271 1762328 fix.go:54] fixHost starting: 
	I0127 12:18:50.456578 1762328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:18:50.456628 1762328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:18:50.471494 1762328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42345
	I0127 12:18:50.471960 1762328 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:18:50.472495 1762328 main.go:141] libmachine: Using API Version  1
	I0127 12:18:50.472526 1762328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:18:50.472889 1762328 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:18:50.473120 1762328 main.go:141] libmachine: (test-preload-126856) Calling .DriverName
	I0127 12:18:50.473253 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetState
	I0127 12:18:50.474899 1762328 fix.go:112] recreateIfNeeded on test-preload-126856: state=Stopped err=<nil>
	I0127 12:18:50.474934 1762328 main.go:141] libmachine: (test-preload-126856) Calling .DriverName
	W0127 12:18:50.475066 1762328 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 12:18:50.477387 1762328 out.go:177] * Restarting existing kvm2 VM for "test-preload-126856" ...
	I0127 12:18:50.478525 1762328 main.go:141] libmachine: (test-preload-126856) Calling .Start
	I0127 12:18:50.478724 1762328 main.go:141] libmachine: (test-preload-126856) starting domain...
	I0127 12:18:50.478751 1762328 main.go:141] libmachine: (test-preload-126856) ensuring networks are active...
	I0127 12:18:50.479454 1762328 main.go:141] libmachine: (test-preload-126856) Ensuring network default is active
	I0127 12:18:50.479818 1762328 main.go:141] libmachine: (test-preload-126856) Ensuring network mk-test-preload-126856 is active
	I0127 12:18:50.480188 1762328 main.go:141] libmachine: (test-preload-126856) getting domain XML...
	I0127 12:18:50.481143 1762328 main.go:141] libmachine: (test-preload-126856) creating domain...
	I0127 12:18:51.665929 1762328 main.go:141] libmachine: (test-preload-126856) waiting for IP...
	I0127 12:18:51.666801 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:18:51.667208 1762328 main.go:141] libmachine: (test-preload-126856) DBG | unable to find current IP address of domain test-preload-126856 in network mk-test-preload-126856
	I0127 12:18:51.667345 1762328 main.go:141] libmachine: (test-preload-126856) DBG | I0127 12:18:51.667222 1762395 retry.go:31] will retry after 242.029342ms: waiting for domain to come up
	I0127 12:18:51.910659 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:18:51.911058 1762328 main.go:141] libmachine: (test-preload-126856) DBG | unable to find current IP address of domain test-preload-126856 in network mk-test-preload-126856
	I0127 12:18:51.911098 1762328 main.go:141] libmachine: (test-preload-126856) DBG | I0127 12:18:51.911049 1762395 retry.go:31] will retry after 314.234682ms: waiting for domain to come up
	I0127 12:18:52.226454 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:18:52.226950 1762328 main.go:141] libmachine: (test-preload-126856) DBG | unable to find current IP address of domain test-preload-126856 in network mk-test-preload-126856
	I0127 12:18:52.226983 1762328 main.go:141] libmachine: (test-preload-126856) DBG | I0127 12:18:52.226912 1762395 retry.go:31] will retry after 457.848737ms: waiting for domain to come up
	I0127 12:18:52.686517 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:18:52.686932 1762328 main.go:141] libmachine: (test-preload-126856) DBG | unable to find current IP address of domain test-preload-126856 in network mk-test-preload-126856
	I0127 12:18:52.686964 1762328 main.go:141] libmachine: (test-preload-126856) DBG | I0127 12:18:52.686886 1762395 retry.go:31] will retry after 549.834544ms: waiting for domain to come up
	I0127 12:18:53.238709 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:18:53.239117 1762328 main.go:141] libmachine: (test-preload-126856) DBG | unable to find current IP address of domain test-preload-126856 in network mk-test-preload-126856
	I0127 12:18:53.239156 1762328 main.go:141] libmachine: (test-preload-126856) DBG | I0127 12:18:53.239043 1762395 retry.go:31] will retry after 476.886791ms: waiting for domain to come up
	I0127 12:18:53.717929 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:18:53.718474 1762328 main.go:141] libmachine: (test-preload-126856) DBG | unable to find current IP address of domain test-preload-126856 in network mk-test-preload-126856
	I0127 12:18:53.718516 1762328 main.go:141] libmachine: (test-preload-126856) DBG | I0127 12:18:53.718450 1762395 retry.go:31] will retry after 940.397227ms: waiting for domain to come up
	I0127 12:18:54.660143 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:18:54.660605 1762328 main.go:141] libmachine: (test-preload-126856) DBG | unable to find current IP address of domain test-preload-126856 in network mk-test-preload-126856
	I0127 12:18:54.660634 1762328 main.go:141] libmachine: (test-preload-126856) DBG | I0127 12:18:54.660560 1762395 retry.go:31] will retry after 1.130726956s: waiting for domain to come up
	I0127 12:18:55.793303 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:18:55.793698 1762328 main.go:141] libmachine: (test-preload-126856) DBG | unable to find current IP address of domain test-preload-126856 in network mk-test-preload-126856
	I0127 12:18:55.793734 1762328 main.go:141] libmachine: (test-preload-126856) DBG | I0127 12:18:55.793651 1762395 retry.go:31] will retry after 912.713118ms: waiting for domain to come up
	I0127 12:18:56.707748 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:18:56.708167 1762328 main.go:141] libmachine: (test-preload-126856) DBG | unable to find current IP address of domain test-preload-126856 in network mk-test-preload-126856
	I0127 12:18:56.708197 1762328 main.go:141] libmachine: (test-preload-126856) DBG | I0127 12:18:56.708145 1762395 retry.go:31] will retry after 1.647548123s: waiting for domain to come up
	I0127 12:18:58.357913 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:18:58.358351 1762328 main.go:141] libmachine: (test-preload-126856) DBG | unable to find current IP address of domain test-preload-126856 in network mk-test-preload-126856
	I0127 12:18:58.358379 1762328 main.go:141] libmachine: (test-preload-126856) DBG | I0127 12:18:58.358303 1762395 retry.go:31] will retry after 1.697957811s: waiting for domain to come up
	I0127 12:19:00.058397 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:19:00.058814 1762328 main.go:141] libmachine: (test-preload-126856) DBG | unable to find current IP address of domain test-preload-126856 in network mk-test-preload-126856
	I0127 12:19:00.058852 1762328 main.go:141] libmachine: (test-preload-126856) DBG | I0127 12:19:00.058797 1762395 retry.go:31] will retry after 1.860659479s: waiting for domain to come up
	I0127 12:19:01.920905 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:19:01.921271 1762328 main.go:141] libmachine: (test-preload-126856) DBG | unable to find current IP address of domain test-preload-126856 in network mk-test-preload-126856
	I0127 12:19:01.921297 1762328 main.go:141] libmachine: (test-preload-126856) DBG | I0127 12:19:01.921231 1762395 retry.go:31] will retry after 3.42246739s: waiting for domain to come up
	I0127 12:19:05.347822 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:19:05.348224 1762328 main.go:141] libmachine: (test-preload-126856) DBG | unable to find current IP address of domain test-preload-126856 in network mk-test-preload-126856
	I0127 12:19:05.348313 1762328 main.go:141] libmachine: (test-preload-126856) DBG | I0127 12:19:05.348191 1762395 retry.go:31] will retry after 3.195831165s: waiting for domain to come up
	I0127 12:19:08.547893 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:19:08.548328 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has current primary IP address 192.168.39.180 and MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:19:08.548344 1762328 main.go:141] libmachine: (test-preload-126856) found domain IP: 192.168.39.180
	I0127 12:19:08.548353 1762328 main.go:141] libmachine: (test-preload-126856) reserving static IP address...
	I0127 12:19:08.548810 1762328 main.go:141] libmachine: (test-preload-126856) DBG | found host DHCP lease matching {name: "test-preload-126856", mac: "52:54:00:0d:11:18", ip: "192.168.39.180"} in network mk-test-preload-126856: {Iface:virbr1 ExpiryTime:2025-01-27 13:19:01 +0000 UTC Type:0 Mac:52:54:00:0d:11:18 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:test-preload-126856 Clientid:01:52:54:00:0d:11:18}
	I0127 12:19:08.548843 1762328 main.go:141] libmachine: (test-preload-126856) reserved static IP address 192.168.39.180 for domain test-preload-126856
	I0127 12:19:08.548864 1762328 main.go:141] libmachine: (test-preload-126856) DBG | skip adding static IP to network mk-test-preload-126856 - found existing host DHCP lease matching {name: "test-preload-126856", mac: "52:54:00:0d:11:18", ip: "192.168.39.180"}
	I0127 12:19:08.548882 1762328 main.go:141] libmachine: (test-preload-126856) DBG | Getting to WaitForSSH function...
	I0127 12:19:08.548896 1762328 main.go:141] libmachine: (test-preload-126856) waiting for SSH...
	I0127 12:19:08.551126 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:19:08.551446 1762328 main.go:141] libmachine: (test-preload-126856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:11:18", ip: ""} in network mk-test-preload-126856: {Iface:virbr1 ExpiryTime:2025-01-27 13:19:01 +0000 UTC Type:0 Mac:52:54:00:0d:11:18 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:test-preload-126856 Clientid:01:52:54:00:0d:11:18}
	I0127 12:19:08.551482 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined IP address 192.168.39.180 and MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:19:08.551585 1762328 main.go:141] libmachine: (test-preload-126856) DBG | Using SSH client type: external
	I0127 12:19:08.551611 1762328 main.go:141] libmachine: (test-preload-126856) DBG | Using SSH private key: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/test-preload-126856/id_rsa (-rw-------)
	I0127 12:19:08.551659 1762328 main.go:141] libmachine: (test-preload-126856) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/test-preload-126856/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 12:19:08.551679 1762328 main.go:141] libmachine: (test-preload-126856) DBG | About to run SSH command:
	I0127 12:19:08.551693 1762328 main.go:141] libmachine: (test-preload-126856) DBG | exit 0
	I0127 12:19:08.678329 1762328 main.go:141] libmachine: (test-preload-126856) DBG | SSH cmd err, output: <nil>: 
	I0127 12:19:08.678651 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetConfigRaw
	I0127 12:19:08.679335 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetIP
	I0127 12:19:08.681586 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:19:08.681944 1762328 main.go:141] libmachine: (test-preload-126856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:11:18", ip: ""} in network mk-test-preload-126856: {Iface:virbr1 ExpiryTime:2025-01-27 13:19:01 +0000 UTC Type:0 Mac:52:54:00:0d:11:18 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:test-preload-126856 Clientid:01:52:54:00:0d:11:18}
	I0127 12:19:08.681988 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined IP address 192.168.39.180 and MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:19:08.682171 1762328 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/test-preload-126856/config.json ...
	I0127 12:19:08.682351 1762328 machine.go:93] provisionDockerMachine start ...
	I0127 12:19:08.682369 1762328 main.go:141] libmachine: (test-preload-126856) Calling .DriverName
	I0127 12:19:08.682559 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHHostname
	I0127 12:19:08.684493 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:19:08.684827 1762328 main.go:141] libmachine: (test-preload-126856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:11:18", ip: ""} in network mk-test-preload-126856: {Iface:virbr1 ExpiryTime:2025-01-27 13:19:01 +0000 UTC Type:0 Mac:52:54:00:0d:11:18 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:test-preload-126856 Clientid:01:52:54:00:0d:11:18}
	I0127 12:19:08.684859 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined IP address 192.168.39.180 and MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:19:08.684944 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHPort
	I0127 12:19:08.685107 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHKeyPath
	I0127 12:19:08.685250 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHKeyPath
	I0127 12:19:08.685367 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHUsername
	I0127 12:19:08.685517 1762328 main.go:141] libmachine: Using SSH client type: native
	I0127 12:19:08.685745 1762328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I0127 12:19:08.685758 1762328 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 12:19:08.794415 1762328 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 12:19:08.794443 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetMachineName
	I0127 12:19:08.794694 1762328 buildroot.go:166] provisioning hostname "test-preload-126856"
	I0127 12:19:08.794738 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetMachineName
	I0127 12:19:08.794968 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHHostname
	I0127 12:19:08.797310 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:19:08.797632 1762328 main.go:141] libmachine: (test-preload-126856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:11:18", ip: ""} in network mk-test-preload-126856: {Iface:virbr1 ExpiryTime:2025-01-27 13:19:01 +0000 UTC Type:0 Mac:52:54:00:0d:11:18 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:test-preload-126856 Clientid:01:52:54:00:0d:11:18}
	I0127 12:19:08.797663 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined IP address 192.168.39.180 and MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:19:08.797767 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHPort
	I0127 12:19:08.797966 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHKeyPath
	I0127 12:19:08.798081 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHKeyPath
	I0127 12:19:08.798186 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHUsername
	I0127 12:19:08.798385 1762328 main.go:141] libmachine: Using SSH client type: native
	I0127 12:19:08.798606 1762328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I0127 12:19:08.798625 1762328 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-126856 && echo "test-preload-126856" | sudo tee /etc/hostname
	I0127 12:19:08.925299 1762328 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-126856
	
	I0127 12:19:08.925333 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHHostname
	I0127 12:19:08.927907 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:19:08.928289 1762328 main.go:141] libmachine: (test-preload-126856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:11:18", ip: ""} in network mk-test-preload-126856: {Iface:virbr1 ExpiryTime:2025-01-27 13:19:01 +0000 UTC Type:0 Mac:52:54:00:0d:11:18 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:test-preload-126856 Clientid:01:52:54:00:0d:11:18}
	I0127 12:19:08.928313 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined IP address 192.168.39.180 and MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:19:08.928516 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHPort
	I0127 12:19:08.928731 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHKeyPath
	I0127 12:19:08.928922 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHKeyPath
	I0127 12:19:08.929038 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHUsername
	I0127 12:19:08.929186 1762328 main.go:141] libmachine: Using SSH client type: native
	I0127 12:19:08.929355 1762328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I0127 12:19:08.929370 1762328 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-126856' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-126856/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-126856' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 12:19:09.042855 1762328 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 12:19:09.042890 1762328 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20318-1724227/.minikube CaCertPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20318-1724227/.minikube}
	I0127 12:19:09.042911 1762328 buildroot.go:174] setting up certificates
	I0127 12:19:09.042922 1762328 provision.go:84] configureAuth start
	I0127 12:19:09.042931 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetMachineName
	I0127 12:19:09.043251 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetIP
	I0127 12:19:09.045865 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:19:09.046163 1762328 main.go:141] libmachine: (test-preload-126856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:11:18", ip: ""} in network mk-test-preload-126856: {Iface:virbr1 ExpiryTime:2025-01-27 13:19:01 +0000 UTC Type:0 Mac:52:54:00:0d:11:18 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:test-preload-126856 Clientid:01:52:54:00:0d:11:18}
	I0127 12:19:09.046203 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined IP address 192.168.39.180 and MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:19:09.046315 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHHostname
	I0127 12:19:09.048454 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:19:09.048737 1762328 main.go:141] libmachine: (test-preload-126856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:11:18", ip: ""} in network mk-test-preload-126856: {Iface:virbr1 ExpiryTime:2025-01-27 13:19:01 +0000 UTC Type:0 Mac:52:54:00:0d:11:18 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:test-preload-126856 Clientid:01:52:54:00:0d:11:18}
	I0127 12:19:09.048771 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined IP address 192.168.39.180 and MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:19:09.048938 1762328 provision.go:143] copyHostCerts
	I0127 12:19:09.049003 1762328 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.pem, removing ...
	I0127 12:19:09.049013 1762328 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.pem
	I0127 12:19:09.049088 1762328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.pem (1078 bytes)
	I0127 12:19:09.049186 1762328 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-1724227/.minikube/cert.pem, removing ...
	I0127 12:19:09.049195 1762328 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-1724227/.minikube/cert.pem
	I0127 12:19:09.049222 1762328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20318-1724227/.minikube/cert.pem (1123 bytes)
	I0127 12:19:09.049288 1762328 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-1724227/.minikube/key.pem, removing ...
	I0127 12:19:09.049305 1762328 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-1724227/.minikube/key.pem
	I0127 12:19:09.049332 1762328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20318-1724227/.minikube/key.pem (1675 bytes)
	I0127 12:19:09.049389 1762328 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca-key.pem org=jenkins.test-preload-126856 san=[127.0.0.1 192.168.39.180 localhost minikube test-preload-126856]
	I0127 12:19:09.112942 1762328 provision.go:177] copyRemoteCerts
	I0127 12:19:09.113003 1762328 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 12:19:09.113029 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHHostname
	I0127 12:19:09.115294 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:19:09.115584 1762328 main.go:141] libmachine: (test-preload-126856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:11:18", ip: ""} in network mk-test-preload-126856: {Iface:virbr1 ExpiryTime:2025-01-27 13:19:01 +0000 UTC Type:0 Mac:52:54:00:0d:11:18 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:test-preload-126856 Clientid:01:52:54:00:0d:11:18}
	I0127 12:19:09.115622 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined IP address 192.168.39.180 and MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:19:09.115736 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHPort
	I0127 12:19:09.115919 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHKeyPath
	I0127 12:19:09.116103 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHUsername
	I0127 12:19:09.116241 1762328 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/test-preload-126856/id_rsa Username:docker}
	I0127 12:19:09.200102 1762328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 12:19:09.222585 1762328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0127 12:19:09.243207 1762328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 12:19:09.263597 1762328 provision.go:87] duration metric: took 220.664666ms to configureAuth
	I0127 12:19:09.263620 1762328 buildroot.go:189] setting minikube options for container-runtime
	I0127 12:19:09.263761 1762328 config.go:182] Loaded profile config "test-preload-126856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0127 12:19:09.263856 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHHostname
	I0127 12:19:09.266436 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:19:09.266791 1762328 main.go:141] libmachine: (test-preload-126856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:11:18", ip: ""} in network mk-test-preload-126856: {Iface:virbr1 ExpiryTime:2025-01-27 13:19:01 +0000 UTC Type:0 Mac:52:54:00:0d:11:18 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:test-preload-126856 Clientid:01:52:54:00:0d:11:18}
	I0127 12:19:09.266826 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined IP address 192.168.39.180 and MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:19:09.267031 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHPort
	I0127 12:19:09.267220 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHKeyPath
	I0127 12:19:09.267381 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHKeyPath
	I0127 12:19:09.267520 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHUsername
	I0127 12:19:09.267681 1762328 main.go:141] libmachine: Using SSH client type: native
	I0127 12:19:09.267887 1762328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I0127 12:19:09.267911 1762328 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 12:19:09.489182 1762328 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 12:19:09.489213 1762328 machine.go:96] duration metric: took 806.849623ms to provisionDockerMachine
	I0127 12:19:09.489226 1762328 start.go:293] postStartSetup for "test-preload-126856" (driver="kvm2")
	I0127 12:19:09.489238 1762328 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 12:19:09.489270 1762328 main.go:141] libmachine: (test-preload-126856) Calling .DriverName
	I0127 12:19:09.489592 1762328 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 12:19:09.489632 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHHostname
	I0127 12:19:09.492091 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:19:09.492400 1762328 main.go:141] libmachine: (test-preload-126856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:11:18", ip: ""} in network mk-test-preload-126856: {Iface:virbr1 ExpiryTime:2025-01-27 13:19:01 +0000 UTC Type:0 Mac:52:54:00:0d:11:18 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:test-preload-126856 Clientid:01:52:54:00:0d:11:18}
	I0127 12:19:09.492425 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined IP address 192.168.39.180 and MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:19:09.492522 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHPort
	I0127 12:19:09.492717 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHKeyPath
	I0127 12:19:09.492875 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHUsername
	I0127 12:19:09.493032 1762328 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/test-preload-126856/id_rsa Username:docker}
	I0127 12:19:09.576239 1762328 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 12:19:09.579852 1762328 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 12:19:09.579880 1762328 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-1724227/.minikube/addons for local assets ...
	I0127 12:19:09.579953 1762328 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-1724227/.minikube/files for local assets ...
	I0127 12:19:09.580055 1762328 filesync.go:149] local asset: /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem -> 17313962.pem in /etc/ssl/certs
	I0127 12:19:09.580187 1762328 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 12:19:09.588569 1762328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem --> /etc/ssl/certs/17313962.pem (1708 bytes)
	I0127 12:19:09.610478 1762328 start.go:296] duration metric: took 121.238974ms for postStartSetup
	I0127 12:19:09.610515 1762328 fix.go:56] duration metric: took 19.154245739s for fixHost
	I0127 12:19:09.610545 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHHostname
	I0127 12:19:09.613102 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:19:09.613478 1762328 main.go:141] libmachine: (test-preload-126856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:11:18", ip: ""} in network mk-test-preload-126856: {Iface:virbr1 ExpiryTime:2025-01-27 13:19:01 +0000 UTC Type:0 Mac:52:54:00:0d:11:18 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:test-preload-126856 Clientid:01:52:54:00:0d:11:18}
	I0127 12:19:09.613509 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined IP address 192.168.39.180 and MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:19:09.613627 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHPort
	I0127 12:19:09.613811 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHKeyPath
	I0127 12:19:09.614013 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHKeyPath
	I0127 12:19:09.614140 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHUsername
	I0127 12:19:09.614311 1762328 main.go:141] libmachine: Using SSH client type: native
	I0127 12:19:09.614475 1762328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I0127 12:19:09.614485 1762328 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 12:19:09.723052 1762328 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737980349.681828555
	
	I0127 12:19:09.723085 1762328 fix.go:216] guest clock: 1737980349.681828555
	I0127 12:19:09.723096 1762328 fix.go:229] Guest: 2025-01-27 12:19:09.681828555 +0000 UTC Remote: 2025-01-27 12:19:09.610519146 +0000 UTC m=+31.338984302 (delta=71.309409ms)
	I0127 12:19:09.723126 1762328 fix.go:200] guest clock delta is within tolerance: 71.309409ms
	I0127 12:19:09.723134 1762328 start.go:83] releasing machines lock for "test-preload-126856", held for 19.266880596s
	I0127 12:19:09.723161 1762328 main.go:141] libmachine: (test-preload-126856) Calling .DriverName
	I0127 12:19:09.723404 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetIP
	I0127 12:19:09.726034 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:19:09.726411 1762328 main.go:141] libmachine: (test-preload-126856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:11:18", ip: ""} in network mk-test-preload-126856: {Iface:virbr1 ExpiryTime:2025-01-27 13:19:01 +0000 UTC Type:0 Mac:52:54:00:0d:11:18 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:test-preload-126856 Clientid:01:52:54:00:0d:11:18}
	I0127 12:19:09.726445 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined IP address 192.168.39.180 and MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:19:09.726561 1762328 main.go:141] libmachine: (test-preload-126856) Calling .DriverName
	I0127 12:19:09.727032 1762328 main.go:141] libmachine: (test-preload-126856) Calling .DriverName
	I0127 12:19:09.727198 1762328 main.go:141] libmachine: (test-preload-126856) Calling .DriverName
	I0127 12:19:09.727304 1762328 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 12:19:09.727349 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHHostname
	I0127 12:19:09.727372 1762328 ssh_runner.go:195] Run: cat /version.json
	I0127 12:19:09.727399 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHHostname
	I0127 12:19:09.729469 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:19:09.729753 1762328 main.go:141] libmachine: (test-preload-126856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:11:18", ip: ""} in network mk-test-preload-126856: {Iface:virbr1 ExpiryTime:2025-01-27 13:19:01 +0000 UTC Type:0 Mac:52:54:00:0d:11:18 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:test-preload-126856 Clientid:01:52:54:00:0d:11:18}
	I0127 12:19:09.729783 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined IP address 192.168.39.180 and MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:19:09.729804 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:19:09.729919 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHPort
	I0127 12:19:09.730080 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHKeyPath
	I0127 12:19:09.730220 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHUsername
	I0127 12:19:09.730251 1762328 main.go:141] libmachine: (test-preload-126856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:11:18", ip: ""} in network mk-test-preload-126856: {Iface:virbr1 ExpiryTime:2025-01-27 13:19:01 +0000 UTC Type:0 Mac:52:54:00:0d:11:18 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:test-preload-126856 Clientid:01:52:54:00:0d:11:18}
	I0127 12:19:09.730285 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined IP address 192.168.39.180 and MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:19:09.730363 1762328 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/test-preload-126856/id_rsa Username:docker}
	I0127 12:19:09.730462 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHPort
	I0127 12:19:09.730599 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHKeyPath
	I0127 12:19:09.730762 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHUsername
	I0127 12:19:09.730900 1762328 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/test-preload-126856/id_rsa Username:docker}
	I0127 12:19:09.843335 1762328 ssh_runner.go:195] Run: systemctl --version
	I0127 12:19:09.848749 1762328 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 12:19:09.985771 1762328 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 12:19:09.991366 1762328 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 12:19:09.991431 1762328 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 12:19:10.006442 1762328 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 12:19:10.006468 1762328 start.go:495] detecting cgroup driver to use...
	I0127 12:19:10.006534 1762328 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 12:19:10.020953 1762328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 12:19:10.033661 1762328 docker.go:217] disabling cri-docker service (if available) ...
	I0127 12:19:10.033718 1762328 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 12:19:10.046058 1762328 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 12:19:10.058184 1762328 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 12:19:10.173070 1762328 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 12:19:10.322852 1762328 docker.go:233] disabling docker service ...
	I0127 12:19:10.322928 1762328 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 12:19:10.336230 1762328 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 12:19:10.347916 1762328 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 12:19:10.465359 1762328 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 12:19:10.579754 1762328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 12:19:10.592716 1762328 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 12:19:10.608960 1762328 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0127 12:19:10.609012 1762328 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:19:10.618318 1762328 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 12:19:10.618369 1762328 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:19:10.627827 1762328 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:19:10.636916 1762328 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:19:10.645961 1762328 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 12:19:10.655309 1762328 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:19:10.664430 1762328 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:19:10.681426 1762328 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
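
The sed invocations above rewrite CRI-O's 02-crio.conf drop-in: pin the pause image, switch the cgroup manager to cgroupfs, and open unprivileged ports via default_sysctls. A rough Go equivalent of the first two substitutions (option names and file path as in the log; run it against a scratch copy, since the real file needs root):

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// setOption replaces the whole existing line for an option, commented or not,
	// mirroring the `sed -i 's|^.*key = .*$|key = "value"|'` pattern in the log.
	func setOption(conf []byte, key, value string) []byte {
		re := regexp.MustCompile(`(?m)^.*` + key + ` = .*$`)
		return re.ReplaceAll(conf, []byte(fmt.Sprintf(`%s = "%s"`, key, value)))
	}

	func main() {
		path := "02-crio.conf" // scratch copy of /etc/crio/crio.conf.d/02-crio.conf
		conf, err := os.ReadFile(path)
		if err != nil {
			fmt.Println(err)
			return
		}
		conf = setOption(conf, "pause_image", "registry.k8s.io/pause:3.7")
		conf = setOption(conf, "cgroup_manager", "cgroupfs")
		if err := os.WriteFile(path, conf, 0644); err != nil {
			fmt.Println(err)
		}
	}
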
	I0127 12:19:10.690708 1762328 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 12:19:10.698852 1762328 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 12:19:10.698900 1762328 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 12:19:10.710077 1762328 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
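
The sysctl probe above fails with status 255 because `net.bridge.bridge-nf-call-iptables` only exists once the br_netfilter module is loaded, so the failure simply triggers a `modprobe` fallback before IPv4 forwarding is enabled. A minimal sketch of that sequence (plain exec calls instead of minikube's SSH runner, for illustration only):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%s %v: %v: %s", name, args, err, out)
		}
		return nil
	}

	func main() {
		// Expected to fail on a fresh guest; the error is the cue to load the module.
		if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
			fmt.Println("bridge netfilter sysctl missing, loading br_netfilter:", err)
			if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
				panic(err)
			}
		}
		// Kubernetes networking needs IPv4 forwarding either way.
		if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
			panic(err)
		}
	}
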
	I0127 12:19:10.718401 1762328 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:19:10.824547 1762328 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 12:19:10.909975 1762328 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 12:19:10.910059 1762328 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 12:19:10.914413 1762328 start.go:563] Will wait 60s for crictl version
	I0127 12:19:10.914477 1762328 ssh_runner.go:195] Run: which crictl
	I0127 12:19:10.917783 1762328 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 12:19:10.954617 1762328 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 12:19:10.954728 1762328 ssh_runner.go:195] Run: crio --version
	I0127 12:19:10.980997 1762328 ssh_runner.go:195] Run: crio --version
	I0127 12:19:11.008839 1762328 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0127 12:19:11.010053 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetIP
	I0127 12:19:11.012846 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:19:11.013233 1762328 main.go:141] libmachine: (test-preload-126856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:11:18", ip: ""} in network mk-test-preload-126856: {Iface:virbr1 ExpiryTime:2025-01-27 13:19:01 +0000 UTC Type:0 Mac:52:54:00:0d:11:18 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:test-preload-126856 Clientid:01:52:54:00:0d:11:18}
	I0127 12:19:11.013267 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined IP address 192.168.39.180 and MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:19:11.013444 1762328 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0127 12:19:11.017302 1762328 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:19:11.030406 1762328 kubeadm.go:883] updating cluster {Name:test-preload-126856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-126856 Namespace:defa
ult APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.180 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:
0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 12:19:11.030512 1762328 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0127 12:19:11.030573 1762328 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 12:19:11.069234 1762328 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0127 12:19:11.069308 1762328 ssh_runner.go:195] Run: which lz4
	I0127 12:19:11.073201 1762328 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 12:19:11.077244 1762328 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 12:19:11.077276 1762328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0127 12:19:12.386225 1762328 crio.go:462] duration metric: took 1.31305147s to copy over tarball
	I0127 12:19:12.386305 1762328 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 12:19:14.659802 1762328 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.273463037s)
	I0127 12:19:14.659841 1762328 crio.go:469] duration metric: took 2.27358586s to extract the tarball
	I0127 12:19:14.659849 1762328 ssh_runner.go:146] rm: /preloaded.tar.lz4
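
The preload sequence above is: stat /preloaded.tar.lz4 on the guest, copy the ~459 MB cached tarball over when it is missing, unpack it under /var so the images land in CRI-O's storage, then delete the tarball. A minimal local sketch of the same flow (a plain file copy stands in for minikube's scp-over-SSH; paths mirror the log and need root to use for real):

	package main

	import (
		"fmt"
		"io"
		"os"
		"os/exec"
	)

	const (
		cachedTarball = "/home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4"
		guestTarball  = "/preloaded.tar.lz4"
	)

	func copyFile(src, dst string) error {
		in, err := os.Open(src)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	}

	func main() {
		if _, err := os.Stat(guestTarball); os.IsNotExist(err) {
			fmt.Println("preload tarball missing, copying from cache")
			if err := copyFile(cachedTarball, guestTarball); err != nil {
				panic(err)
			}
		}
		// Same extraction command as the log: keep xattrs, decompress with lz4, unpack under /var.
		cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", guestTarball)
		if out, err := cmd.CombinedOutput(); err != nil {
			panic(fmt.Sprintf("extract failed: %v: %s", err, out))
		}
		os.Remove(guestTarball) // no longer needed once extracted
	}
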
	I0127 12:19:14.700167 1762328 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 12:19:14.738643 1762328 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0127 12:19:14.738674 1762328 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 12:19:14.738768 1762328 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:19:14.738796 1762328 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0127 12:19:14.738801 1762328 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0127 12:19:14.738878 1762328 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0127 12:19:14.738822 1762328 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0127 12:19:14.738913 1762328 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0127 12:19:14.738827 1762328 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0127 12:19:14.738769 1762328 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 12:19:14.740296 1762328 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0127 12:19:14.740314 1762328 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:19:14.740329 1762328 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0127 12:19:14.740299 1762328 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0127 12:19:14.740297 1762328 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0127 12:19:14.740304 1762328 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 12:19:14.740304 1762328 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0127 12:19:14.740304 1762328 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0127 12:19:14.946272 1762328 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0127 12:19:14.964834 1762328 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0127 12:19:14.975522 1762328 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0127 12:19:14.982495 1762328 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 12:19:14.987067 1762328 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0127 12:19:14.990922 1762328 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0127 12:19:14.997950 1762328 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0127 12:19:14.997992 1762328 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0127 12:19:14.998035 1762328 ssh_runner.go:195] Run: which crictl
	I0127 12:19:15.001085 1762328 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0127 12:19:15.074183 1762328 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0127 12:19:15.074234 1762328 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0127 12:19:15.074252 1762328 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0127 12:19:15.074272 1762328 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0127 12:19:15.074284 1762328 ssh_runner.go:195] Run: which crictl
	I0127 12:19:15.074303 1762328 ssh_runner.go:195] Run: which crictl
	I0127 12:19:15.109717 1762328 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0127 12:19:15.109767 1762328 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 12:19:15.109815 1762328 ssh_runner.go:195] Run: which crictl
	I0127 12:19:15.109730 1762328 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0127 12:19:15.109888 1762328 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0127 12:19:15.109907 1762328 ssh_runner.go:195] Run: which crictl
	I0127 12:19:15.122933 1762328 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0127 12:19:15.122969 1762328 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0127 12:19:15.123014 1762328 ssh_runner.go:195] Run: which crictl
	I0127 12:19:15.123032 1762328 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0127 12:19:15.123049 1762328 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0127 12:19:15.123082 1762328 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0127 12:19:15.123094 1762328 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0127 12:19:15.123107 1762328 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0127 12:19:15.123109 1762328 ssh_runner.go:195] Run: which crictl
	I0127 12:19:15.123147 1762328 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0127 12:19:15.123179 1762328 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 12:19:15.227659 1762328 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0127 12:19:15.227738 1762328 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0127 12:19:15.227775 1762328 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0127 12:19:15.227810 1762328 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0127 12:19:15.227839 1762328 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0127 12:19:15.227917 1762328 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0127 12:19:15.227924 1762328 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 12:19:15.329362 1762328 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0127 12:19:15.329442 1762328 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0127 12:19:15.403063 1762328 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0127 12:19:15.403150 1762328 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0127 12:19:15.403185 1762328 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0127 12:19:15.403237 1762328 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 12:19:15.403280 1762328 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0127 12:19:15.403331 1762328 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0127 12:19:15.403411 1762328 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0127 12:19:15.406966 1762328 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0127 12:19:15.523299 1762328 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0127 12:19:15.523418 1762328 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0127 12:19:15.526106 1762328 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0127 12:19:15.526181 1762328 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0127 12:19:15.526194 1762328 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0127 12:19:15.526244 1762328 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0127 12:19:15.526301 1762328 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0127 12:19:15.526342 1762328 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0127 12:19:15.526343 1762328 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0127 12:19:15.526397 1762328 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0127 12:19:15.526397 1762328 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0127 12:19:15.526418 1762328 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0127 12:19:15.531128 1762328 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0127 12:19:15.531216 1762328 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0127 12:19:15.535129 1762328 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0127 12:19:15.574632 1762328 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0127 12:19:15.574659 1762328 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0127 12:19:15.574704 1762328 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0127 12:19:15.574763 1762328 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0127 12:19:15.912501 1762328 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:19:18.497072 1762328 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4: (2.970646648s)
	I0127 12:19:18.497112 1762328 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4: (2.970666162s)
	I0127 12:19:18.497132 1762328 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: (2.965895862s)
	I0127 12:19:18.497152 1762328 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0127 12:19:18.497137 1762328 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0127 12:19:18.497194 1762328 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0127 12:19:18.497122 1762328 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0127 12:19:18.497215 1762328 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4: (2.922431963s)
	I0127 12:19:18.497258 1762328 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0127 12:19:18.497268 1762328 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0127 12:19:18.497311 1762328 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.58477483s)
	I0127 12:19:20.546018 1762328 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.048734628s)
	I0127 12:19:20.546046 1762328 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0127 12:19:20.546074 1762328 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0127 12:19:20.546122 1762328 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0127 12:19:20.687541 1762328 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0127 12:19:20.687600 1762328 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0127 12:19:20.687655 1762328 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0127 12:19:21.531986 1762328 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0127 12:19:21.532039 1762328 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0127 12:19:21.532100 1762328 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0127 12:19:21.876531 1762328 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0127 12:19:21.876586 1762328 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0127 12:19:21.876647 1762328 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0127 12:19:22.523854 1762328 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0127 12:19:22.523900 1762328 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0127 12:19:22.523950 1762328 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0127 12:19:23.269620 1762328 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0127 12:19:23.269668 1762328 cache_images.go:123] Successfully loaded all cached images
	I0127 12:19:23.269673 1762328 cache_images.go:92] duration metric: took 8.530985704s to LoadCachedImages
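
The cache_images pass traced above boils down to: for every required image, ask the runtime whether it already has it; if not, transfer the cached archive and `podman load` it. A sequential sketch of that loop (minikube runs it concurrently and over SSH; the helper names here are illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// imagePresent reports whether the container runtime already knows the image.
	func imagePresent(ref string) bool {
		return exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", ref).Run() == nil
	}

	// archiveFor maps an image reference to its cached archive, e.g.
	// registry.k8s.io/kube-proxy:v1.24.4 -> /var/lib/minikube/images/kube-proxy_v1.24.4.
	func archiveFor(ref string) string {
		return filepath.Join("/var/lib/minikube/images", filepath.Base(strings.ReplaceAll(ref, ":", "_")))
	}

	func main() {
		images := []string{
			"registry.k8s.io/kube-apiserver:v1.24.4",
			"registry.k8s.io/kube-controller-manager:v1.24.4",
			"registry.k8s.io/kube-scheduler:v1.24.4",
			"registry.k8s.io/kube-proxy:v1.24.4",
			"registry.k8s.io/pause:3.7",
			"registry.k8s.io/etcd:3.5.3-0",
			"registry.k8s.io/coredns/coredns:v1.8.6",
		}
		for _, ref := range images {
			if imagePresent(ref) {
				fmt.Println("already loaded:", ref)
				continue
			}
			archive := archiveFor(ref)
			fmt.Println("loading", ref, "from", archive)
			if out, err := exec.Command("sudo", "podman", "load", "-i", archive).CombinedOutput(); err != nil {
				fmt.Printf("failed to load %s: %v: %s\n", ref, err, out)
			}
		}
	}
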
	I0127 12:19:23.269696 1762328 kubeadm.go:934] updating node { 192.168.39.180 8443 v1.24.4 crio true true} ...
	I0127 12:19:23.269795 1762328 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-126856 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-126856 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 12:19:23.269858 1762328 ssh_runner.go:195] Run: crio config
	I0127 12:19:23.319321 1762328 cni.go:84] Creating CNI manager for ""
	I0127 12:19:23.319342 1762328 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 12:19:23.319351 1762328 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 12:19:23.319370 1762328 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.180 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-126856 NodeName:test-preload-126856 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 12:19:23.319504 1762328 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.180
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-126856"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 12:19:23.319563 1762328 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0127 12:19:23.329008 1762328 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 12:19:23.329064 1762328 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 12:19:23.337856 1762328 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0127 12:19:23.353038 1762328 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 12:19:23.367749 1762328 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0127 12:19:23.382787 1762328 ssh_runner.go:195] Run: grep 192.168.39.180	control-plane.minikube.internal$ /etc/hosts
	I0127 12:19:23.386030 1762328 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.180	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
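
Both /etc/hosts updates in this log (host.minikube.internal earlier, control-plane.minikube.internal here) use the same idempotent trick: strip any existing line for the name with `grep -v`, append the fresh mapping, and copy the temp file back over /etc/hosts. The same idea in Go, pointed at a scratch file since the real one needs root:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func ensureHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any previous mapping for this host name (tab-separated, as in the log).
			if strings.HasSuffix(line, "\t"+host) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		tmp := path + ".tmp"
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			return err
		}
		return os.Rename(tmp, path) // replace in one step, like `cp /tmp/h.$$ /etc/hosts`
	}

	func main() {
		if err := ensureHostsEntry("hosts.sample", "192.168.39.180", "control-plane.minikube.internal"); err != nil {
			fmt.Println(err)
		}
	}
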
	I0127 12:19:23.396792 1762328 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:19:23.506628 1762328 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:19:23.522888 1762328 certs.go:68] Setting up /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/test-preload-126856 for IP: 192.168.39.180
	I0127 12:19:23.522916 1762328 certs.go:194] generating shared ca certs ...
	I0127 12:19:23.522937 1762328 certs.go:226] acquiring lock for ca certs: {Name:mk9e5c59e9dbe250ccc1601895509e2d4f3690e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:19:23.523175 1762328 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.key
	I0127 12:19:23.523274 1762328 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.key
	I0127 12:19:23.523291 1762328 certs.go:256] generating profile certs ...
	I0127 12:19:23.523390 1762328 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/test-preload-126856/client.key
	I0127 12:19:23.523450 1762328 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/test-preload-126856/apiserver.key.66f28b18
	I0127 12:19:23.523492 1762328 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/test-preload-126856/proxy-client.key
	I0127 12:19:23.523602 1762328 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/1731396.pem (1338 bytes)
	W0127 12:19:23.523631 1762328 certs.go:480] ignoring /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/1731396_empty.pem, impossibly tiny 0 bytes
	I0127 12:19:23.523639 1762328 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 12:19:23.523663 1762328 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem (1078 bytes)
	I0127 12:19:23.523688 1762328 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem (1123 bytes)
	I0127 12:19:23.523711 1762328 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/key.pem (1675 bytes)
	I0127 12:19:23.523779 1762328 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem (1708 bytes)
	I0127 12:19:23.524576 1762328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 12:19:23.558820 1762328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 12:19:23.587469 1762328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 12:19:23.615103 1762328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 12:19:23.637939 1762328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/test-preload-126856/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0127 12:19:23.661101 1762328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/test-preload-126856/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 12:19:23.682451 1762328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/test-preload-126856/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 12:19:23.722191 1762328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/test-preload-126856/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 12:19:23.743991 1762328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 12:19:23.765310 1762328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/1731396.pem --> /usr/share/ca-certificates/1731396.pem (1338 bytes)
	I0127 12:19:23.786392 1762328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem --> /usr/share/ca-certificates/17313962.pem (1708 bytes)
	I0127 12:19:23.807249 1762328 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 12:19:23.822241 1762328 ssh_runner.go:195] Run: openssl version
	I0127 12:19:23.827404 1762328 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17313962.pem && ln -fs /usr/share/ca-certificates/17313962.pem /etc/ssl/certs/17313962.pem"
	I0127 12:19:23.836726 1762328 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17313962.pem
	I0127 12:19:23.840711 1762328 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 11:32 /usr/share/ca-certificates/17313962.pem
	I0127 12:19:23.840749 1762328 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17313962.pem
	I0127 12:19:23.845954 1762328 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17313962.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 12:19:23.855393 1762328 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 12:19:23.864612 1762328 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:19:23.868491 1762328 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 11:24 /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:19:23.868539 1762328 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:19:23.873601 1762328 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 12:19:23.882798 1762328 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1731396.pem && ln -fs /usr/share/ca-certificates/1731396.pem /etc/ssl/certs/1731396.pem"
	I0127 12:19:23.892238 1762328 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1731396.pem
	I0127 12:19:23.896168 1762328 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 11:32 /usr/share/ca-certificates/1731396.pem
	I0127 12:19:23.896258 1762328 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1731396.pem
	I0127 12:19:23.901391 1762328 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1731396.pem /etc/ssl/certs/51391683.0"
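
The certificate installation loop above follows the standard OpenSSL hashed-directory layout: each PEM is copied into /usr/share/ca-certificates, its subject hash is computed with `openssl x509 -hash -noout`, and /etc/ssl/certs/<hash>.0 is symlinked to it so TLS libraries can find it. A minimal sketch of one iteration (shelling out to openssl as the log does; the real directories need root, so try it with throwaway paths):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// subjectHash returns the short OpenSSL subject hash, e.g. "b5213941" for minikubeCA above.
	func subjectHash(pemPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func installCA(pemPath, certsDir string) error {
		hash, err := subjectHash(pemPath)
		if err != nil {
			return err
		}
		link := fmt.Sprintf("%s/%s.0", certsDir, hash)
		os.Remove(link) // recreate if present, mirroring `ln -fs`
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Println("install failed (expected without root):", err)
		}
	}
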
	I0127 12:19:23.910927 1762328 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 12:19:23.914897 1762328 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 12:19:23.920169 1762328 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 12:19:23.925358 1762328 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 12:19:23.930593 1762328 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 12:19:23.935664 1762328 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 12:19:23.940655 1762328 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
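
The block of openssl runs above is a bulk expiry probe: `x509 -checkend 86400` exits 0 when a certificate is still valid for the next 24 hours and non-zero otherwise, so only the exit code matters. A small sketch of the same check over a few of the paths from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// validForADay reports whether the certificate survives the next 86400 seconds.
	func validForADay(certPath string) bool {
		return exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run() == nil
	}

	func main() {
		certs := []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		}
		for _, c := range certs {
			fmt.Printf("%s valid for 24h: %v\n", c, validForADay(c))
		}
	}
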
	I0127 12:19:23.945718 1762328 kubeadm.go:392] StartCluster: {Name:test-preload-126856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-126856 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.180 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:19:23.945796 1762328 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 12:19:23.945836 1762328 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 12:19:23.979556 1762328 cri.go:89] found id: ""
	I0127 12:19:23.979622 1762328 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 12:19:23.988831 1762328 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 12:19:23.988857 1762328 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 12:19:23.988935 1762328 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 12:19:23.997688 1762328 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 12:19:23.998132 1762328 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-126856" does not appear in /home/jenkins/minikube-integration/20318-1724227/kubeconfig
	I0127 12:19:23.998284 1762328 kubeconfig.go:62] /home/jenkins/minikube-integration/20318-1724227/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-126856" cluster setting kubeconfig missing "test-preload-126856" context setting]
	I0127 12:19:23.998670 1762328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/kubeconfig: {Name:mk9a903cebca75da3c74ff68f708e0a4e05f999b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:19:23.999357 1762328 kapi.go:59] client config for test-preload-126856: &rest.Config{Host:"https://192.168.39.180:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/test-preload-126856/client.crt", KeyFile:"/home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/test-preload-126856/client.key", CAFile:"/home/jenkins/minikube-integration/20318-1724227/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]
uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c3e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0127 12:19:24.000045 1762328 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 12:19:24.008379 1762328 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.180
	I0127 12:19:24.008424 1762328 kubeadm.go:1160] stopping kube-system containers ...
	I0127 12:19:24.008446 1762328 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0127 12:19:24.008496 1762328 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 12:19:24.040654 1762328 cri.go:89] found id: ""
	I0127 12:19:24.040737 1762328 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 12:19:24.056306 1762328 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 12:19:24.065223 1762328 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 12:19:24.065243 1762328 kubeadm.go:157] found existing configuration files:
	
	I0127 12:19:24.065295 1762328 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 12:19:24.074484 1762328 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 12:19:24.074524 1762328 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 12:19:24.083156 1762328 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 12:19:24.091243 1762328 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 12:19:24.091292 1762328 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 12:19:24.099666 1762328 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 12:19:24.107575 1762328 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 12:19:24.107625 1762328 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 12:19:24.116208 1762328 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 12:19:24.124295 1762328 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 12:19:24.124330 1762328 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
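
The grep/rm pairs above implement the stale-config sweep: a kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; otherwise it is removed (missing files included, hence `rm -f`) so the following `kubeadm init phase kubeconfig` can regenerate it. The same sweep as a Go sketch:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		configs := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, path := range configs {
			data, err := os.ReadFile(path)
			if err == nil && strings.Contains(string(data), endpoint) {
				fmt.Println("keeping", path) // already points at the right API server
				continue
			}
			// Missing or stale: remove it, ignoring "not found" just like `rm -f`.
			if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
				fmt.Println("could not remove", path, ":", err)
			}
		}
	}
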
	I0127 12:19:24.132615 1762328 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 12:19:24.141053 1762328 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:19:24.239030 1762328 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:19:25.069657 1762328 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:19:25.319035 1762328 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:19:25.387947 1762328 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:19:25.492632 1762328 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:19:25.492739 1762328 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:19:25.993479 1762328 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:19:26.493484 1762328 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:19:26.508751 1762328 api_server.go:72] duration metric: took 1.016116394s to wait for apiserver process to appear ...
	I0127 12:19:26.508777 1762328 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:19:26.508802 1762328 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8443/healthz ...
	I0127 12:19:26.509257 1762328 api_server.go:269] stopped: https://192.168.39.180:8443/healthz: Get "https://192.168.39.180:8443/healthz": dial tcp 192.168.39.180:8443: connect: connection refused
	I0127 12:19:27.008840 1762328 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8443/healthz ...
	I0127 12:19:30.409574 1762328 api_server.go:279] https://192.168.39.180:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 12:19:30.409604 1762328 api_server.go:103] status: https://192.168.39.180:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 12:19:30.409622 1762328 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8443/healthz ...
	I0127 12:19:30.535957 1762328 api_server.go:279] https://192.168.39.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 12:19:30.535989 1762328 api_server.go:103] status: https://192.168.39.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 12:19:30.536008 1762328 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8443/healthz ...
	I0127 12:19:30.544322 1762328 api_server.go:279] https://192.168.39.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 12:19:30.544359 1762328 api_server.go:103] status: https://192.168.39.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 12:19:31.009022 1762328 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8443/healthz ...
	I0127 12:19:31.014456 1762328 api_server.go:279] https://192.168.39.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 12:19:31.014536 1762328 api_server.go:103] status: https://192.168.39.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 12:19:31.509187 1762328 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8443/healthz ...
	I0127 12:19:31.514636 1762328 api_server.go:279] https://192.168.39.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 12:19:31.514662 1762328 api_server.go:103] status: https://192.168.39.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 12:19:32.009279 1762328 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8443/healthz ...
	I0127 12:19:32.014652 1762328 api_server.go:279] https://192.168.39.180:8443/healthz returned 200:
	ok
	I0127 12:19:32.021378 1762328 api_server.go:141] control plane version: v1.24.4
	I0127 12:19:32.021402 1762328 api_server.go:131] duration metric: took 5.512618219s to wait for apiserver health ...
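	(Editor's note, not part of the captured log: the preceding lines show the start-up code polling https://192.168.39.180:8443/healthz roughly every 500ms until it returns 200. The sketch below is illustrative only and is not minikube's implementation; the insecure TLS client, the 5s per-request timeout, and the overall deadline are assumptions made so the example is self-contained.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The test cluster uses a self-signed CA; verification is
				// skipped purely for this sketch.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver answered "ok"
				}
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence seen above
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.180:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}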
	I0127 12:19:32.021412 1762328 cni.go:84] Creating CNI manager for ""
	I0127 12:19:32.021422 1762328 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 12:19:32.022983 1762328 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 12:19:32.024398 1762328 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 12:19:32.043220 1762328 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
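	(Editor's note, not part of the captured log: the step above copies a 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist, but the file contents are not shown. The sketch below writes a generic bridge-plus-portmap conflist with assumed values for the network name and pod subnet, purely to illustrate the file format; it is not the actual file minikube generated.)

	package main

	import "os"

	// bridgeConflist is an illustrative bridge CNI network config, not the
	// real 1-k8s.conflist contents.
	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}
	`

	func main() {
		// 0644 so the kubelet and CRI-O can read the network config.
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
			panic(err)
		}
	}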
	I0127 12:19:32.060616 1762328 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:19:32.060713 1762328 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0127 12:19:32.060749 1762328 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0127 12:19:32.072507 1762328 system_pods.go:59] 7 kube-system pods found
	I0127 12:19:32.072536 1762328 system_pods.go:61] "coredns-6d4b75cb6d-md8mg" [a7e85a16-30d7-4452-adb2-e151e664fd9a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 12:19:32.072544 1762328 system_pods.go:61] "etcd-test-preload-126856" [52c1adf6-043f-4352-9ffb-115db2f76ec7] Running
	I0127 12:19:32.072553 1762328 system_pods.go:61] "kube-apiserver-test-preload-126856" [371a1e6a-6e23-4671-abb0-eeebce4709ac] Running
	I0127 12:19:32.072565 1762328 system_pods.go:61] "kube-controller-manager-test-preload-126856" [21a7a990-d812-44ee-a0bd-46fd41be15ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 12:19:32.072583 1762328 system_pods.go:61] "kube-proxy-vk66g" [e63705d9-bd02-41b7-8249-ad09420f07c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0127 12:19:32.072589 1762328 system_pods.go:61] "kube-scheduler-test-preload-126856" [7beac1ca-a8a0-42c8-b8a4-e319600638a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 12:19:32.072598 1762328 system_pods.go:61] "storage-provisioner" [fb6bb6cc-8a30-4da3-bfaa-8a3b869f7ecb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 12:19:32.072608 1762328 system_pods.go:74] duration metric: took 11.968016ms to wait for pod list to return data ...
	I0127 12:19:32.072617 1762328 node_conditions.go:102] verifying NodePressure condition ...
	I0127 12:19:32.076116 1762328 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 12:19:32.076144 1762328 node_conditions.go:123] node cpu capacity is 2
	I0127 12:19:32.076157 1762328 node_conditions.go:105] duration metric: took 3.535774ms to run NodePressure ...
	I0127 12:19:32.076176 1762328 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:19:32.257302 1762328 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0127 12:19:32.263308 1762328 kubeadm.go:739] kubelet initialised
	I0127 12:19:32.263330 1762328 kubeadm.go:740] duration metric: took 5.996577ms waiting for restarted kubelet to initialise ...
	I0127 12:19:32.263339 1762328 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:19:32.267719 1762328 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-md8mg" in "kube-system" namespace to be "Ready" ...
	I0127 12:19:32.272931 1762328 pod_ready.go:98] node "test-preload-126856" hosting pod "coredns-6d4b75cb6d-md8mg" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-126856" has status "Ready":"False"
	I0127 12:19:32.272960 1762328 pod_ready.go:82] duration metric: took 5.217576ms for pod "coredns-6d4b75cb6d-md8mg" in "kube-system" namespace to be "Ready" ...
	E0127 12:19:32.272969 1762328 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-126856" hosting pod "coredns-6d4b75cb6d-md8mg" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-126856" has status "Ready":"False"
	I0127 12:19:32.272978 1762328 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-126856" in "kube-system" namespace to be "Ready" ...
	I0127 12:19:32.283774 1762328 pod_ready.go:98] node "test-preload-126856" hosting pod "etcd-test-preload-126856" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-126856" has status "Ready":"False"
	I0127 12:19:32.283798 1762328 pod_ready.go:82] duration metric: took 10.812801ms for pod "etcd-test-preload-126856" in "kube-system" namespace to be "Ready" ...
	E0127 12:19:32.283807 1762328 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-126856" hosting pod "etcd-test-preload-126856" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-126856" has status "Ready":"False"
	I0127 12:19:32.283813 1762328 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-126856" in "kube-system" namespace to be "Ready" ...
	I0127 12:19:32.289533 1762328 pod_ready.go:98] node "test-preload-126856" hosting pod "kube-apiserver-test-preload-126856" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-126856" has status "Ready":"False"
	I0127 12:19:32.289556 1762328 pod_ready.go:82] duration metric: took 5.734061ms for pod "kube-apiserver-test-preload-126856" in "kube-system" namespace to be "Ready" ...
	E0127 12:19:32.289564 1762328 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-126856" hosting pod "kube-apiserver-test-preload-126856" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-126856" has status "Ready":"False"
	I0127 12:19:32.289570 1762328 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-126856" in "kube-system" namespace to be "Ready" ...
	I0127 12:19:32.464117 1762328 pod_ready.go:98] node "test-preload-126856" hosting pod "kube-controller-manager-test-preload-126856" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-126856" has status "Ready":"False"
	I0127 12:19:32.464149 1762328 pod_ready.go:82] duration metric: took 174.569406ms for pod "kube-controller-manager-test-preload-126856" in "kube-system" namespace to be "Ready" ...
	E0127 12:19:32.464160 1762328 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-126856" hosting pod "kube-controller-manager-test-preload-126856" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-126856" has status "Ready":"False"
	I0127 12:19:32.464166 1762328 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-vk66g" in "kube-system" namespace to be "Ready" ...
	I0127 12:19:32.864306 1762328 pod_ready.go:98] node "test-preload-126856" hosting pod "kube-proxy-vk66g" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-126856" has status "Ready":"False"
	I0127 12:19:32.864339 1762328 pod_ready.go:82] duration metric: took 400.16428ms for pod "kube-proxy-vk66g" in "kube-system" namespace to be "Ready" ...
	E0127 12:19:32.864350 1762328 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-126856" hosting pod "kube-proxy-vk66g" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-126856" has status "Ready":"False"
	I0127 12:19:32.864356 1762328 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-126856" in "kube-system" namespace to be "Ready" ...
	I0127 12:19:33.264286 1762328 pod_ready.go:98] node "test-preload-126856" hosting pod "kube-scheduler-test-preload-126856" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-126856" has status "Ready":"False"
	I0127 12:19:33.264318 1762328 pod_ready.go:82] duration metric: took 399.954329ms for pod "kube-scheduler-test-preload-126856" in "kube-system" namespace to be "Ready" ...
	E0127 12:19:33.264329 1762328 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-126856" hosting pod "kube-scheduler-test-preload-126856" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-126856" has status "Ready":"False"
	I0127 12:19:33.264336 1762328 pod_ready.go:39] duration metric: took 1.000988217s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
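	(Editor's note, not part of the captured log: the pod_ready lines above come from repeatedly checking each system-critical pod's Ready condition through the Kubernetes API. A minimal client-go sketch of that single check is shown below; the kubeconfig path and pod name are taken from this run's log for illustration, and the code does not reflect minikube's internal wait loop.)

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20318-1724227/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-test-preload-126856", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("pod %s ready: %v\n", pod.Name, isPodReady(pod))
	}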
	I0127 12:19:33.264364 1762328 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 12:19:33.276575 1762328 ops.go:34] apiserver oom_adj: -16
	I0127 12:19:33.276613 1762328 kubeadm.go:597] duration metric: took 9.287737105s to restartPrimaryControlPlane
	I0127 12:19:33.276627 1762328 kubeadm.go:394] duration metric: took 9.330913985s to StartCluster
	I0127 12:19:33.276663 1762328 settings.go:142] acquiring lock: {Name:mk5612abdbdf8001cdf3481ad7c8001a04d496dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:19:33.276757 1762328 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20318-1724227/kubeconfig
	I0127 12:19:33.277402 1762328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/kubeconfig: {Name:mk9a903cebca75da3c74ff68f708e0a4e05f999b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:19:33.277676 1762328 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.180 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 12:19:33.277880 1762328 config.go:182] Loaded profile config "test-preload-126856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0127 12:19:33.277839 1762328 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 12:19:33.277945 1762328 addons.go:69] Setting storage-provisioner=true in profile "test-preload-126856"
	I0127 12:19:33.277976 1762328 addons.go:238] Setting addon storage-provisioner=true in "test-preload-126856"
	W0127 12:19:33.277988 1762328 addons.go:247] addon storage-provisioner should already be in state true
	I0127 12:19:33.278026 1762328 host.go:66] Checking if "test-preload-126856" exists ...
	I0127 12:19:33.277950 1762328 addons.go:69] Setting default-storageclass=true in profile "test-preload-126856"
	I0127 12:19:33.278063 1762328 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-126856"
	I0127 12:19:33.278428 1762328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:19:33.278472 1762328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:19:33.278507 1762328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:19:33.278562 1762328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:19:33.279477 1762328 out.go:177] * Verifying Kubernetes components...
	I0127 12:19:33.280893 1762328 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:19:33.294585 1762328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41679
	I0127 12:19:33.294602 1762328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37065
	I0127 12:19:33.295100 1762328 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:19:33.295192 1762328 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:19:33.295599 1762328 main.go:141] libmachine: Using API Version  1
	I0127 12:19:33.295627 1762328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:19:33.295776 1762328 main.go:141] libmachine: Using API Version  1
	I0127 12:19:33.295798 1762328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:19:33.295966 1762328 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:19:33.296181 1762328 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:19:33.296377 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetState
	I0127 12:19:33.296569 1762328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:19:33.296618 1762328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:19:33.298465 1762328 kapi.go:59] client config for test-preload-126856: &rest.Config{Host:"https://192.168.39.180:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/test-preload-126856/client.crt", KeyFile:"/home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/test-preload-126856/client.key", CAFile:"/home/jenkins/minikube-integration/20318-1724227/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]
uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c3e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0127 12:19:33.298728 1762328 addons.go:238] Setting addon default-storageclass=true in "test-preload-126856"
	W0127 12:19:33.298763 1762328 addons.go:247] addon default-storageclass should already be in state true
	I0127 12:19:33.298792 1762328 host.go:66] Checking if "test-preload-126856" exists ...
	I0127 12:19:33.299049 1762328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:19:33.299088 1762328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:19:33.311775 1762328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35809
	I0127 12:19:33.312241 1762328 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:19:33.312789 1762328 main.go:141] libmachine: Using API Version  1
	I0127 12:19:33.312811 1762328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:19:33.313164 1762328 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:19:33.313218 1762328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46279
	I0127 12:19:33.313652 1762328 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:19:33.314123 1762328 main.go:141] libmachine: Using API Version  1
	I0127 12:19:33.314139 1762328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:19:33.314437 1762328 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:19:33.315078 1762328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:19:33.315130 1762328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:19:33.322466 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetState
	I0127 12:19:33.324454 1762328 main.go:141] libmachine: (test-preload-126856) Calling .DriverName
	I0127 12:19:33.326222 1762328 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:19:33.327882 1762328 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:19:33.327916 1762328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 12:19:33.327949 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHHostname
	I0127 12:19:33.331341 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:19:33.331832 1762328 main.go:141] libmachine: (test-preload-126856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:11:18", ip: ""} in network mk-test-preload-126856: {Iface:virbr1 ExpiryTime:2025-01-27 13:19:01 +0000 UTC Type:0 Mac:52:54:00:0d:11:18 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:test-preload-126856 Clientid:01:52:54:00:0d:11:18}
	I0127 12:19:33.331857 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined IP address 192.168.39.180 and MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:19:33.332070 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHPort
	I0127 12:19:33.332284 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHKeyPath
	I0127 12:19:33.332459 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHUsername
	I0127 12:19:33.332618 1762328 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/test-preload-126856/id_rsa Username:docker}
	I0127 12:19:33.354249 1762328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35719
	I0127 12:19:33.354697 1762328 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:19:33.355203 1762328 main.go:141] libmachine: Using API Version  1
	I0127 12:19:33.355226 1762328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:19:33.355496 1762328 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:19:33.355677 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetState
	I0127 12:19:33.357102 1762328 main.go:141] libmachine: (test-preload-126856) Calling .DriverName
	I0127 12:19:33.357355 1762328 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 12:19:33.357381 1762328 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 12:19:33.357401 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHHostname
	I0127 12:19:33.360043 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:19:33.360823 1762328 main.go:141] libmachine: (test-preload-126856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:11:18", ip: ""} in network mk-test-preload-126856: {Iface:virbr1 ExpiryTime:2025-01-27 13:19:01 +0000 UTC Type:0 Mac:52:54:00:0d:11:18 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:test-preload-126856 Clientid:01:52:54:00:0d:11:18}
	I0127 12:19:33.360826 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHPort
	I0127 12:19:33.360861 1762328 main.go:141] libmachine: (test-preload-126856) DBG | domain test-preload-126856 has defined IP address 192.168.39.180 and MAC address 52:54:00:0d:11:18 in network mk-test-preload-126856
	I0127 12:19:33.361016 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHKeyPath
	I0127 12:19:33.361198 1762328 main.go:141] libmachine: (test-preload-126856) Calling .GetSSHUsername
	I0127 12:19:33.361327 1762328 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/test-preload-126856/id_rsa Username:docker}
	I0127 12:19:33.464501 1762328 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:19:33.481593 1762328 node_ready.go:35] waiting up to 6m0s for node "test-preload-126856" to be "Ready" ...
	I0127 12:19:33.563329 1762328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:19:33.619148 1762328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 12:19:34.534268 1762328 main.go:141] libmachine: Making call to close driver server
	I0127 12:19:34.534292 1762328 main.go:141] libmachine: (test-preload-126856) Calling .Close
	I0127 12:19:34.534329 1762328 main.go:141] libmachine: Making call to close driver server
	I0127 12:19:34.534351 1762328 main.go:141] libmachine: (test-preload-126856) Calling .Close
	I0127 12:19:34.534601 1762328 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:19:34.534618 1762328 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:19:34.534627 1762328 main.go:141] libmachine: Making call to close driver server
	I0127 12:19:34.534623 1762328 main.go:141] libmachine: (test-preload-126856) DBG | Closing plugin on server side
	I0127 12:19:34.534634 1762328 main.go:141] libmachine: (test-preload-126856) Calling .Close
	I0127 12:19:34.534636 1762328 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:19:34.534650 1762328 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:19:34.534660 1762328 main.go:141] libmachine: Making call to close driver server
	I0127 12:19:34.534672 1762328 main.go:141] libmachine: (test-preload-126856) Calling .Close
	I0127 12:19:34.534966 1762328 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:19:34.534984 1762328 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:19:34.535050 1762328 main.go:141] libmachine: (test-preload-126856) DBG | Closing plugin on server side
	I0127 12:19:34.535071 1762328 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:19:34.535088 1762328 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:19:34.543325 1762328 main.go:141] libmachine: Making call to close driver server
	I0127 12:19:34.543339 1762328 main.go:141] libmachine: (test-preload-126856) Calling .Close
	I0127 12:19:34.543598 1762328 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:19:34.543613 1762328 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:19:34.543656 1762328 main.go:141] libmachine: (test-preload-126856) DBG | Closing plugin on server side
	I0127 12:19:34.545067 1762328 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0127 12:19:34.546199 1762328 addons.go:514] duration metric: took 1.268396712s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0127 12:19:35.487433 1762328 node_ready.go:53] node "test-preload-126856" has status "Ready":"False"
	I0127 12:19:37.984939 1762328 node_ready.go:53] node "test-preload-126856" has status "Ready":"False"
	I0127 12:19:39.985841 1762328 node_ready.go:53] node "test-preload-126856" has status "Ready":"False"
	I0127 12:19:40.985991 1762328 node_ready.go:49] node "test-preload-126856" has status "Ready":"True"
	I0127 12:19:40.986024 1762328 node_ready.go:38] duration metric: took 7.504403325s for node "test-preload-126856" to be "Ready" ...
	I0127 12:19:40.986037 1762328 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:19:40.990735 1762328 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-md8mg" in "kube-system" namespace to be "Ready" ...
	I0127 12:19:40.995388 1762328 pod_ready.go:93] pod "coredns-6d4b75cb6d-md8mg" in "kube-system" namespace has status "Ready":"True"
	I0127 12:19:40.995409 1762328 pod_ready.go:82] duration metric: took 4.632391ms for pod "coredns-6d4b75cb6d-md8mg" in "kube-system" namespace to be "Ready" ...
	I0127 12:19:40.995417 1762328 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-126856" in "kube-system" namespace to be "Ready" ...
	I0127 12:19:40.999366 1762328 pod_ready.go:93] pod "etcd-test-preload-126856" in "kube-system" namespace has status "Ready":"True"
	I0127 12:19:40.999389 1762328 pod_ready.go:82] duration metric: took 3.965275ms for pod "etcd-test-preload-126856" in "kube-system" namespace to be "Ready" ...
	I0127 12:19:40.999400 1762328 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-126856" in "kube-system" namespace to be "Ready" ...
	I0127 12:19:41.003628 1762328 pod_ready.go:93] pod "kube-apiserver-test-preload-126856" in "kube-system" namespace has status "Ready":"True"
	I0127 12:19:41.003650 1762328 pod_ready.go:82] duration metric: took 4.242395ms for pod "kube-apiserver-test-preload-126856" in "kube-system" namespace to be "Ready" ...
	I0127 12:19:41.003658 1762328 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-126856" in "kube-system" namespace to be "Ready" ...
	I0127 12:19:41.007870 1762328 pod_ready.go:93] pod "kube-controller-manager-test-preload-126856" in "kube-system" namespace has status "Ready":"True"
	I0127 12:19:41.007886 1762328 pod_ready.go:82] duration metric: took 4.221846ms for pod "kube-controller-manager-test-preload-126856" in "kube-system" namespace to be "Ready" ...
	I0127 12:19:41.007899 1762328 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vk66g" in "kube-system" namespace to be "Ready" ...
	I0127 12:19:41.386707 1762328 pod_ready.go:93] pod "kube-proxy-vk66g" in "kube-system" namespace has status "Ready":"True"
	I0127 12:19:41.386734 1762328 pod_ready.go:82] duration metric: took 378.829677ms for pod "kube-proxy-vk66g" in "kube-system" namespace to be "Ready" ...
	I0127 12:19:41.386765 1762328 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-126856" in "kube-system" namespace to be "Ready" ...
	I0127 12:19:41.785704 1762328 pod_ready.go:93] pod "kube-scheduler-test-preload-126856" in "kube-system" namespace has status "Ready":"True"
	I0127 12:19:41.785736 1762328 pod_ready.go:82] duration metric: took 398.963496ms for pod "kube-scheduler-test-preload-126856" in "kube-system" namespace to be "Ready" ...
	I0127 12:19:41.785748 1762328 pod_ready.go:39] duration metric: took 799.700445ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:19:41.785764 1762328 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:19:41.785822 1762328 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:19:41.800739 1762328 api_server.go:72] duration metric: took 8.523021374s to wait for apiserver process to appear ...
	I0127 12:19:41.800763 1762328 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:19:41.800780 1762328 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8443/healthz ...
	I0127 12:19:41.805424 1762328 api_server.go:279] https://192.168.39.180:8443/healthz returned 200:
	ok
	I0127 12:19:41.806147 1762328 api_server.go:141] control plane version: v1.24.4
	I0127 12:19:41.806165 1762328 api_server.go:131] duration metric: took 5.397132ms to wait for apiserver health ...
	I0127 12:19:41.806173 1762328 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:19:41.989571 1762328 system_pods.go:59] 7 kube-system pods found
	I0127 12:19:41.989612 1762328 system_pods.go:61] "coredns-6d4b75cb6d-md8mg" [a7e85a16-30d7-4452-adb2-e151e664fd9a] Running
	I0127 12:19:41.989628 1762328 system_pods.go:61] "etcd-test-preload-126856" [52c1adf6-043f-4352-9ffb-115db2f76ec7] Running
	I0127 12:19:41.989637 1762328 system_pods.go:61] "kube-apiserver-test-preload-126856" [371a1e6a-6e23-4671-abb0-eeebce4709ac] Running
	I0127 12:19:41.989643 1762328 system_pods.go:61] "kube-controller-manager-test-preload-126856" [21a7a990-d812-44ee-a0bd-46fd41be15ca] Running
	I0127 12:19:41.989647 1762328 system_pods.go:61] "kube-proxy-vk66g" [e63705d9-bd02-41b7-8249-ad09420f07c3] Running
	I0127 12:19:41.989652 1762328 system_pods.go:61] "kube-scheduler-test-preload-126856" [7beac1ca-a8a0-42c8-b8a4-e319600638a7] Running
	I0127 12:19:41.989657 1762328 system_pods.go:61] "storage-provisioner" [fb6bb6cc-8a30-4da3-bfaa-8a3b869f7ecb] Running
	I0127 12:19:41.989664 1762328 system_pods.go:74] duration metric: took 183.484543ms to wait for pod list to return data ...
	I0127 12:19:41.989677 1762328 default_sa.go:34] waiting for default service account to be created ...
	I0127 12:19:42.185752 1762328 default_sa.go:45] found service account: "default"
	I0127 12:19:42.185783 1762328 default_sa.go:55] duration metric: took 196.089458ms for default service account to be created ...
	I0127 12:19:42.185793 1762328 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 12:19:42.388443 1762328 system_pods.go:87] 7 kube-system pods found
	I0127 12:19:42.587436 1762328 system_pods.go:105] "coredns-6d4b75cb6d-md8mg" [a7e85a16-30d7-4452-adb2-e151e664fd9a] Running
	I0127 12:19:42.587465 1762328 system_pods.go:105] "etcd-test-preload-126856" [52c1adf6-043f-4352-9ffb-115db2f76ec7] Running
	I0127 12:19:42.587470 1762328 system_pods.go:105] "kube-apiserver-test-preload-126856" [371a1e6a-6e23-4671-abb0-eeebce4709ac] Running
	I0127 12:19:42.587475 1762328 system_pods.go:105] "kube-controller-manager-test-preload-126856" [21a7a990-d812-44ee-a0bd-46fd41be15ca] Running
	I0127 12:19:42.587480 1762328 system_pods.go:105] "kube-proxy-vk66g" [e63705d9-bd02-41b7-8249-ad09420f07c3] Running
	I0127 12:19:42.587484 1762328 system_pods.go:105] "kube-scheduler-test-preload-126856" [7beac1ca-a8a0-42c8-b8a4-e319600638a7] Running
	I0127 12:19:42.587489 1762328 system_pods.go:105] "storage-provisioner" [fb6bb6cc-8a30-4da3-bfaa-8a3b869f7ecb] Running
	I0127 12:19:42.587496 1762328 system_pods.go:147] duration metric: took 401.696791ms to wait for k8s-apps to be running ...
	I0127 12:19:42.587504 1762328 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 12:19:42.587552 1762328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:19:42.603549 1762328 system_svc.go:56] duration metric: took 16.034044ms WaitForService to wait for kubelet
	I0127 12:19:42.603584 1762328 kubeadm.go:582] duration metric: took 9.32587088s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 12:19:42.603604 1762328 node_conditions.go:102] verifying NodePressure condition ...
	I0127 12:19:42.786094 1762328 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 12:19:42.786125 1762328 node_conditions.go:123] node cpu capacity is 2
	I0127 12:19:42.786136 1762328 node_conditions.go:105] duration metric: took 182.527957ms to run NodePressure ...
	I0127 12:19:42.786150 1762328 start.go:241] waiting for startup goroutines ...
	I0127 12:19:42.786157 1762328 start.go:246] waiting for cluster config update ...
	I0127 12:19:42.786168 1762328 start.go:255] writing updated cluster config ...
	I0127 12:19:42.786423 1762328 ssh_runner.go:195] Run: rm -f paused
	I0127 12:19:42.835723 1762328 start.go:600] kubectl: 1.32.1, cluster: 1.24.4 (minor skew: 8)
	I0127 12:19:42.837628 1762328 out.go:201] 
	W0127 12:19:42.838945 1762328 out.go:270] ! /usr/local/bin/kubectl is version 1.32.1, which may have incompatibilities with Kubernetes 1.24.4.
	I0127 12:19:42.840174 1762328 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0127 12:19:42.841529 1762328 out.go:177] * Done! kubectl is now configured to use "test-preload-126856" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 27 12:19:43 test-preload-126856 crio[666]: time="2025-01-27 12:19:43.725758460Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737980383725735983,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dd858b99-46bf-4557-b7ce-ba77e556825d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:19:43 test-preload-126856 crio[666]: time="2025-01-27 12:19:43.726323802Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b22c65a0-0f9a-495a-b40e-af44dcc02548 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:19:43 test-preload-126856 crio[666]: time="2025-01-27 12:19:43.726430594Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b22c65a0-0f9a-495a-b40e-af44dcc02548 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:19:43 test-preload-126856 crio[666]: time="2025-01-27 12:19:43.726725291Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3b293e7381e160de216fafd68233d250a354ee5ba45b59382a87c6f737d8cb8,PodSandboxId:52e9843f3496fc35c9c3938824ee0de33a03ddcc1623fd64c70dd044fe9ec7ef,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1737980378504468091,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-md8mg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7e85a16-30d7-4452-adb2-e151e664fd9a,},Annotations:map[string]string{io.kubernetes.container.hash: 9830e5cc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ac2d25044667955ade04ad308491343b224b37d5ad1a5394c8069f5b9557ebe,PodSandboxId:a18cd154410ffbee44cb3c4b94127769b7b039afb714d450ac422c6e16421372,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1737980371715860197,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vk66g,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: e63705d9-bd02-41b7-8249-ad09420f07c3,},Annotations:map[string]string{io.kubernetes.container.hash: 587fa6c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eabf762d01b89d32b1c73799790216b6db7d18e265cd7ab303987b39a7a04af,PodSandboxId:824ecb07b7d8fa6bb03d2ccc305d23003fe8a81a51fc6c3a453fecb257111d32,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737980371120890593,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb
6bb6cc-8a30-4da3-bfaa-8a3b869f7ecb,},Annotations:map[string]string{io.kubernetes.container.hash: ecf53501,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba3f367b9a309d6092036b1d7d4bb5ed0c82f121b8a59d7ba0fc8411493452c5,PodSandboxId:1bf477f22caadbf8c3661514276de111cad2d3930c39b844cf13b174477b05ce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1737980366098317285,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-126856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eec7cba60
b59e92e6ac8953e1ae238b7,},Annotations:map[string]string{io.kubernetes.container.hash: cf09b767,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28fa5dbaecaaa982cd153240c71e7e8908bb315f4bbefc133c6246091203ae00,PodSandboxId:5965b18c50cb862811da1f116e665ad154b2691b15ec39a7408edb0d7b789d7e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1737980366160251310,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-126856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5d327f77942c9250789
6512e4342eb3,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f8444b704e42f721a06de1a010a891b1b328438373b47c38b1222e3343310fd,PodSandboxId:fea3454c6eefe3a7c210b0bcb87a6c3774b733064228fe45d9c33cd9d33688eb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1737980366134540202,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-126856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60eda53820aadca96dc44a492ac3f3a7,},Annotations:map[string]str
ing{io.kubernetes.container.hash: f0105f00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59f3319c76173eb27c764a22fdf18144e4d348a86fb430fdcab029512372a3a1,PodSandboxId:d2262dee94516ecf74b6ad1dfa1d16467bb518dac6c630c6a957cda29739a6ea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1737980366071803958,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-126856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df781d0612ad29438ea3fda4a97f694a,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b22c65a0-0f9a-495a-b40e-af44dcc02548 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:19:43 test-preload-126856 crio[666]: time="2025-01-27 12:19:43.780490198Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=81f3fb2f-8fc4-494a-8e0f-e91abbe107fd name=/runtime.v1.RuntimeService/Version
	Jan 27 12:19:43 test-preload-126856 crio[666]: time="2025-01-27 12:19:43.780583965Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=81f3fb2f-8fc4-494a-8e0f-e91abbe107fd name=/runtime.v1.RuntimeService/Version
	Jan 27 12:19:43 test-preload-126856 crio[666]: time="2025-01-27 12:19:43.781604254Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bd5dc93d-19ed-4f0b-9825-788f5398d764 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:19:43 test-preload-126856 crio[666]: time="2025-01-27 12:19:43.782072740Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737980383782010909,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bd5dc93d-19ed-4f0b-9825-788f5398d764 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:19:43 test-preload-126856 crio[666]: time="2025-01-27 12:19:43.782487928Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8c12cc67-7f1c-4a1e-bd0f-904beabe0ee9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:19:43 test-preload-126856 crio[666]: time="2025-01-27 12:19:43.782555706Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8c12cc67-7f1c-4a1e-bd0f-904beabe0ee9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:19:43 test-preload-126856 crio[666]: time="2025-01-27 12:19:43.782726579Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3b293e7381e160de216fafd68233d250a354ee5ba45b59382a87c6f737d8cb8,PodSandboxId:52e9843f3496fc35c9c3938824ee0de33a03ddcc1623fd64c70dd044fe9ec7ef,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1737980378504468091,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-md8mg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7e85a16-30d7-4452-adb2-e151e664fd9a,},Annotations:map[string]string{io.kubernetes.container.hash: 9830e5cc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ac2d25044667955ade04ad308491343b224b37d5ad1a5394c8069f5b9557ebe,PodSandboxId:a18cd154410ffbee44cb3c4b94127769b7b039afb714d450ac422c6e16421372,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1737980371715860197,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vk66g,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: e63705d9-bd02-41b7-8249-ad09420f07c3,},Annotations:map[string]string{io.kubernetes.container.hash: 587fa6c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eabf762d01b89d32b1c73799790216b6db7d18e265cd7ab303987b39a7a04af,PodSandboxId:824ecb07b7d8fa6bb03d2ccc305d23003fe8a81a51fc6c3a453fecb257111d32,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737980371120890593,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb
6bb6cc-8a30-4da3-bfaa-8a3b869f7ecb,},Annotations:map[string]string{io.kubernetes.container.hash: ecf53501,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba3f367b9a309d6092036b1d7d4bb5ed0c82f121b8a59d7ba0fc8411493452c5,PodSandboxId:1bf477f22caadbf8c3661514276de111cad2d3930c39b844cf13b174477b05ce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1737980366098317285,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-126856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eec7cba60
b59e92e6ac8953e1ae238b7,},Annotations:map[string]string{io.kubernetes.container.hash: cf09b767,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28fa5dbaecaaa982cd153240c71e7e8908bb315f4bbefc133c6246091203ae00,PodSandboxId:5965b18c50cb862811da1f116e665ad154b2691b15ec39a7408edb0d7b789d7e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1737980366160251310,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-126856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5d327f77942c9250789
6512e4342eb3,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f8444b704e42f721a06de1a010a891b1b328438373b47c38b1222e3343310fd,PodSandboxId:fea3454c6eefe3a7c210b0bcb87a6c3774b733064228fe45d9c33cd9d33688eb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1737980366134540202,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-126856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60eda53820aadca96dc44a492ac3f3a7,},Annotations:map[string]str
ing{io.kubernetes.container.hash: f0105f00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59f3319c76173eb27c764a22fdf18144e4d348a86fb430fdcab029512372a3a1,PodSandboxId:d2262dee94516ecf74b6ad1dfa1d16467bb518dac6c630c6a957cda29739a6ea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1737980366071803958,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-126856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df781d0612ad29438ea3fda4a97f694a,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8c12cc67-7f1c-4a1e-bd0f-904beabe0ee9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:19:43 test-preload-126856 crio[666]: time="2025-01-27 12:19:43.823759894Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3bbd5e8c-512c-461a-9766-c2d914c141bd name=/runtime.v1.RuntimeService/Version
	Jan 27 12:19:43 test-preload-126856 crio[666]: time="2025-01-27 12:19:43.823841539Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3bbd5e8c-512c-461a-9766-c2d914c141bd name=/runtime.v1.RuntimeService/Version
	Jan 27 12:19:43 test-preload-126856 crio[666]: time="2025-01-27 12:19:43.825323003Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4a6f2398-3cd7-4eb9-ba68-9382e82a2882 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:19:43 test-preload-126856 crio[666]: time="2025-01-27 12:19:43.825794920Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737980383825753547,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4a6f2398-3cd7-4eb9-ba68-9382e82a2882 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:19:43 test-preload-126856 crio[666]: time="2025-01-27 12:19:43.826431244Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7f3f471c-17dd-49c5-b7a4-87d23d4eebef name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:19:43 test-preload-126856 crio[666]: time="2025-01-27 12:19:43.826491670Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7f3f471c-17dd-49c5-b7a4-87d23d4eebef name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:19:43 test-preload-126856 crio[666]: time="2025-01-27 12:19:43.826677554Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3b293e7381e160de216fafd68233d250a354ee5ba45b59382a87c6f737d8cb8,PodSandboxId:52e9843f3496fc35c9c3938824ee0de33a03ddcc1623fd64c70dd044fe9ec7ef,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1737980378504468091,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-md8mg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7e85a16-30d7-4452-adb2-e151e664fd9a,},Annotations:map[string]string{io.kubernetes.container.hash: 9830e5cc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ac2d25044667955ade04ad308491343b224b37d5ad1a5394c8069f5b9557ebe,PodSandboxId:a18cd154410ffbee44cb3c4b94127769b7b039afb714d450ac422c6e16421372,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1737980371715860197,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vk66g,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: e63705d9-bd02-41b7-8249-ad09420f07c3,},Annotations:map[string]string{io.kubernetes.container.hash: 587fa6c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eabf762d01b89d32b1c73799790216b6db7d18e265cd7ab303987b39a7a04af,PodSandboxId:824ecb07b7d8fa6bb03d2ccc305d23003fe8a81a51fc6c3a453fecb257111d32,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737980371120890593,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb
6bb6cc-8a30-4da3-bfaa-8a3b869f7ecb,},Annotations:map[string]string{io.kubernetes.container.hash: ecf53501,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba3f367b9a309d6092036b1d7d4bb5ed0c82f121b8a59d7ba0fc8411493452c5,PodSandboxId:1bf477f22caadbf8c3661514276de111cad2d3930c39b844cf13b174477b05ce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1737980366098317285,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-126856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eec7cba60
b59e92e6ac8953e1ae238b7,},Annotations:map[string]string{io.kubernetes.container.hash: cf09b767,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28fa5dbaecaaa982cd153240c71e7e8908bb315f4bbefc133c6246091203ae00,PodSandboxId:5965b18c50cb862811da1f116e665ad154b2691b15ec39a7408edb0d7b789d7e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1737980366160251310,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-126856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5d327f77942c9250789
6512e4342eb3,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f8444b704e42f721a06de1a010a891b1b328438373b47c38b1222e3343310fd,PodSandboxId:fea3454c6eefe3a7c210b0bcb87a6c3774b733064228fe45d9c33cd9d33688eb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1737980366134540202,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-126856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60eda53820aadca96dc44a492ac3f3a7,},Annotations:map[string]str
ing{io.kubernetes.container.hash: f0105f00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59f3319c76173eb27c764a22fdf18144e4d348a86fb430fdcab029512372a3a1,PodSandboxId:d2262dee94516ecf74b6ad1dfa1d16467bb518dac6c630c6a957cda29739a6ea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1737980366071803958,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-126856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df781d0612ad29438ea3fda4a97f694a,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7f3f471c-17dd-49c5-b7a4-87d23d4eebef name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:19:43 test-preload-126856 crio[666]: time="2025-01-27 12:19:43.857138034Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9bf522c0-d4e4-449b-a519-3f92963a71bc name=/runtime.v1.RuntimeService/Version
	Jan 27 12:19:43 test-preload-126856 crio[666]: time="2025-01-27 12:19:43.857255584Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9bf522c0-d4e4-449b-a519-3f92963a71bc name=/runtime.v1.RuntimeService/Version
	Jan 27 12:19:43 test-preload-126856 crio[666]: time="2025-01-27 12:19:43.858307503Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=92b9e7a0-f965-4885-a499-ca8d4f17ab24 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:19:43 test-preload-126856 crio[666]: time="2025-01-27 12:19:43.858799359Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737980383858776736,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=92b9e7a0-f965-4885-a499-ca8d4f17ab24 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:19:43 test-preload-126856 crio[666]: time="2025-01-27 12:19:43.859488369Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=185d960b-2508-4d02-a545-ecfaeda64604 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:19:43 test-preload-126856 crio[666]: time="2025-01-27 12:19:43.859567392Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=185d960b-2508-4d02-a545-ecfaeda64604 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:19:43 test-preload-126856 crio[666]: time="2025-01-27 12:19:43.859755166Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3b293e7381e160de216fafd68233d250a354ee5ba45b59382a87c6f737d8cb8,PodSandboxId:52e9843f3496fc35c9c3938824ee0de33a03ddcc1623fd64c70dd044fe9ec7ef,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1737980378504468091,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-md8mg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7e85a16-30d7-4452-adb2-e151e664fd9a,},Annotations:map[string]string{io.kubernetes.container.hash: 9830e5cc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ac2d25044667955ade04ad308491343b224b37d5ad1a5394c8069f5b9557ebe,PodSandboxId:a18cd154410ffbee44cb3c4b94127769b7b039afb714d450ac422c6e16421372,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1737980371715860197,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vk66g,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: e63705d9-bd02-41b7-8249-ad09420f07c3,},Annotations:map[string]string{io.kubernetes.container.hash: 587fa6c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eabf762d01b89d32b1c73799790216b6db7d18e265cd7ab303987b39a7a04af,PodSandboxId:824ecb07b7d8fa6bb03d2ccc305d23003fe8a81a51fc6c3a453fecb257111d32,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737980371120890593,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb
6bb6cc-8a30-4da3-bfaa-8a3b869f7ecb,},Annotations:map[string]string{io.kubernetes.container.hash: ecf53501,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba3f367b9a309d6092036b1d7d4bb5ed0c82f121b8a59d7ba0fc8411493452c5,PodSandboxId:1bf477f22caadbf8c3661514276de111cad2d3930c39b844cf13b174477b05ce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1737980366098317285,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-126856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eec7cba60
b59e92e6ac8953e1ae238b7,},Annotations:map[string]string{io.kubernetes.container.hash: cf09b767,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28fa5dbaecaaa982cd153240c71e7e8908bb315f4bbefc133c6246091203ae00,PodSandboxId:5965b18c50cb862811da1f116e665ad154b2691b15ec39a7408edb0d7b789d7e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1737980366160251310,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-126856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5d327f77942c9250789
6512e4342eb3,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f8444b704e42f721a06de1a010a891b1b328438373b47c38b1222e3343310fd,PodSandboxId:fea3454c6eefe3a7c210b0bcb87a6c3774b733064228fe45d9c33cd9d33688eb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1737980366134540202,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-126856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60eda53820aadca96dc44a492ac3f3a7,},Annotations:map[string]str
ing{io.kubernetes.container.hash: f0105f00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59f3319c76173eb27c764a22fdf18144e4d348a86fb430fdcab029512372a3a1,PodSandboxId:d2262dee94516ecf74b6ad1dfa1d16467bb518dac6c630c6a957cda29739a6ea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1737980366071803958,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-126856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df781d0612ad29438ea3fda4a97f694a,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=185d960b-2508-4d02-a545-ecfaeda64604 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b3b293e7381e1       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   5 seconds ago       Running             coredns                   1                   52e9843f3496f       coredns-6d4b75cb6d-md8mg
	4ac2d25044667       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   12 seconds ago      Running             kube-proxy                1                   a18cd154410ff       kube-proxy-vk66g
	0eabf762d01b8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 seconds ago      Running             storage-provisioner       1                   824ecb07b7d8f       storage-provisioner
	28fa5dbaecaaa       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   17 seconds ago      Running             kube-scheduler            1                   5965b18c50cb8       kube-scheduler-test-preload-126856
	8f8444b704e42       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   17 seconds ago      Running             etcd                      1                   fea3454c6eefe       etcd-test-preload-126856
	ba3f367b9a309       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   17 seconds ago      Running             kube-apiserver            1                   1bf477f22caad       kube-apiserver-test-preload-126856
	59f3319c76173       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   17 seconds ago      Running             kube-controller-manager   1                   d2262dee94516       kube-controller-manager-test-preload-126856
	
	
	==> coredns [b3b293e7381e160de216fafd68233d250a354ee5ba45b59382a87c6f737d8cb8] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:46388 - 41508 "HINFO IN 6396345227925244642.8015566041598207831. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.15388675s
	
	
	==> describe nodes <==
	Name:               test-preload-126856
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-126856
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650
	                    minikube.k8s.io/name=test-preload-126856
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T12_18_06_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 12:18:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-126856
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 12:19:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 12:19:40 +0000   Mon, 27 Jan 2025 12:18:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 12:19:40 +0000   Mon, 27 Jan 2025 12:18:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 12:19:40 +0000   Mon, 27 Jan 2025 12:18:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 12:19:40 +0000   Mon, 27 Jan 2025 12:19:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.180
	  Hostname:    test-preload-126856
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7a16de25482645d0ad3f4b26d2f29f1a
	  System UUID:                7a16de25-4826-45d0-ad3f-4b26d2f29f1a
	  Boot ID:                    7c16ba4e-7e1a-4c3e-9554-00cf3a44b3a2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-md8mg                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     85s
	  kube-system                 etcd-test-preload-126856                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         98s
	  kube-system                 kube-apiserver-test-preload-126856             250m (12%)    0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-controller-manager-test-preload-126856    200m (10%)    0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-proxy-vk66g                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-scheduler-test-preload-126856             100m (5%)     0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12s                kube-proxy       
	  Normal  Starting                 83s                kube-proxy       
	  Normal  Starting                 98s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  98s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  98s                kubelet          Node test-preload-126856 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    98s                kubelet          Node test-preload-126856 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     98s                kubelet          Node test-preload-126856 status is now: NodeHasSufficientPID
	  Normal  NodeReady                88s                kubelet          Node test-preload-126856 status is now: NodeReady
	  Normal  RegisteredNode           86s                node-controller  Node test-preload-126856 event: Registered Node test-preload-126856 in Controller
	  Normal  Starting                 19s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19s (x8 over 19s)  kubelet          Node test-preload-126856 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19s (x8 over 19s)  kubelet          Node test-preload-126856 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19s (x7 over 19s)  kubelet          Node test-preload-126856 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                 node-controller  Node test-preload-126856 event: Registered Node test-preload-126856 in Controller
	
	
	==> dmesg <==
	[Jan27 12:18] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052372] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037375] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.816487] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.942635] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[Jan27 12:19] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.767560] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.062320] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056432] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.181090] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.113550] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.248536] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[ +12.680977] systemd-fstab-generator[990]: Ignoring "noauto" option for root device
	[  +0.056775] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.743968] systemd-fstab-generator[1118]: Ignoring "noauto" option for root device
	[  +5.872264] kauditd_printk_skb: 105 callbacks suppressed
	[  +2.215285] systemd-fstab-generator[1757]: Ignoring "noauto" option for root device
	[  +5.018834] kauditd_printk_skb: 53 callbacks suppressed
	[  +5.467905] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [8f8444b704e42f721a06de1a010a891b1b328438373b47c38b1222e3343310fd] <==
	{"level":"info","ts":"2025-01-27T12:19:26.501Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"b38c55c42a3b698","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-01-27T12:19:26.526Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-01-27T12:19:26.528Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 switched to configuration voters=(808613133158692504)"}
	{"level":"info","ts":"2025-01-27T12:19:26.528Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"5a7d3c553a64e690","local-member-id":"b38c55c42a3b698","added-peer-id":"b38c55c42a3b698","added-peer-peer-urls":["https://192.168.39.180:2380"]}
	{"level":"info","ts":"2025-01-27T12:19:26.528Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"5a7d3c553a64e690","local-member-id":"b38c55c42a3b698","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T12:19:26.528Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T12:19:26.530Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-01-27T12:19:26.533Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b38c55c42a3b698","initial-advertise-peer-urls":["https://192.168.39.180:2380"],"listen-peer-urls":["https://192.168.39.180:2380"],"advertise-client-urls":["https://192.168.39.180:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.180:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-01-27T12:19:26.533Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-01-27T12:19:26.533Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.180:2380"}
	{"level":"info","ts":"2025-01-27T12:19:26.533Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.180:2380"}
	{"level":"info","ts":"2025-01-27T12:19:28.059Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 is starting a new election at term 2"}
	{"level":"info","ts":"2025-01-27T12:19:28.059Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-01-27T12:19:28.059Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 received MsgPreVoteResp from b38c55c42a3b698 at term 2"}
	{"level":"info","ts":"2025-01-27T12:19:28.059Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 became candidate at term 3"}
	{"level":"info","ts":"2025-01-27T12:19:28.059Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 received MsgVoteResp from b38c55c42a3b698 at term 3"}
	{"level":"info","ts":"2025-01-27T12:19:28.059Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 became leader at term 3"}
	{"level":"info","ts":"2025-01-27T12:19:28.059Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b38c55c42a3b698 elected leader b38c55c42a3b698 at term 3"}
	{"level":"info","ts":"2025-01-27T12:19:28.063Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"b38c55c42a3b698","local-member-attributes":"{Name:test-preload-126856 ClientURLs:[https://192.168.39.180:2379]}","request-path":"/0/members/b38c55c42a3b698/attributes","cluster-id":"5a7d3c553a64e690","publish-timeout":"7s"}
	{"level":"info","ts":"2025-01-27T12:19:28.063Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T12:19:28.065Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T12:19:28.065Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.180:2379"}
	{"level":"info","ts":"2025-01-27T12:19:28.066Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-01-27T12:19:28.066Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-01-27T12:19:28.066Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 12:19:44 up 0 min,  0 users,  load average: 0.58, 0.17, 0.06
	Linux test-preload-126856 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ba3f367b9a309d6092036b1d7d4bb5ed0c82f121b8a59d7ba0fc8411493452c5] <==
	I0127 12:19:30.402826       1 controller.go:85] Starting OpenAPI V3 controller
	I0127 12:19:30.403133       1 naming_controller.go:291] Starting NamingConditionController
	I0127 12:19:30.403662       1 establishing_controller.go:76] Starting EstablishingController
	I0127 12:19:30.403760       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0127 12:19:30.403920       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0127 12:19:30.404010       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0127 12:19:30.457324       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0127 12:19:30.479096       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	E0127 12:19:30.497459       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0127 12:19:30.528396       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0127 12:19:30.531612       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0127 12:19:30.532150       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0127 12:19:30.536134       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0127 12:19:30.537319       1 cache.go:39] Caches are synced for autoregister controller
	I0127 12:19:30.539565       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0127 12:19:31.025654       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0127 12:19:31.325631       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0127 12:19:31.947805       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0127 12:19:32.143320       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0127 12:19:32.150208       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0127 12:19:32.184955       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0127 12:19:32.203908       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0127 12:19:32.209163       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0127 12:19:43.547786       1 controller.go:611] quota admission added evaluator for: endpoints
	I0127 12:19:43.740205       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [59f3319c76173eb27c764a22fdf18144e4d348a86fb430fdcab029512372a3a1] <==
	I0127 12:19:43.507647       1 shared_informer.go:262] Caches are synced for ephemeral
	I0127 12:19:43.512417       1 shared_informer.go:262] Caches are synced for attach detach
	I0127 12:19:43.518731       1 shared_informer.go:255] Waiting for caches to sync for garbage collector
	I0127 12:19:43.520381       1 shared_informer.go:262] Caches are synced for crt configmap
	I0127 12:19:43.522593       1 shared_informer.go:262] Caches are synced for deployment
	I0127 12:19:43.525002       1 shared_informer.go:262] Caches are synced for stateful set
	I0127 12:19:43.525543       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0127 12:19:43.527932       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0127 12:19:43.528878       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0127 12:19:43.531275       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0127 12:19:43.531307       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0127 12:19:43.536712       1 shared_informer.go:262] Caches are synced for PVC protection
	I0127 12:19:43.538648       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0127 12:19:43.538708       1 shared_informer.go:262] Caches are synced for TTL
	I0127 12:19:43.587485       1 shared_informer.go:262] Caches are synced for persistent volume
	I0127 12:19:43.594800       1 shared_informer.go:262] Caches are synced for expand
	I0127 12:19:43.614197       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0127 12:19:43.640748       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0127 12:19:43.651396       1 shared_informer.go:262] Caches are synced for resource quota
	I0127 12:19:43.662416       1 shared_informer.go:262] Caches are synced for resource quota
	I0127 12:19:43.751659       1 shared_informer.go:262] Caches are synced for namespace
	I0127 12:19:43.777543       1 shared_informer.go:262] Caches are synced for service account
	I0127 12:19:44.219762       1 shared_informer.go:262] Caches are synced for garbage collector
	I0127 12:19:44.219798       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0127 12:19:44.219870       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [4ac2d25044667955ade04ad308491343b224b37d5ad1a5394c8069f5b9557ebe] <==
	I0127 12:19:31.909356       1 node.go:163] Successfully retrieved node IP: 192.168.39.180
	I0127 12:19:31.909715       1 server_others.go:138] "Detected node IP" address="192.168.39.180"
	I0127 12:19:31.909853       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0127 12:19:31.940613       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0127 12:19:31.940632       1 server_others.go:206] "Using iptables Proxier"
	I0127 12:19:31.940862       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0127 12:19:31.941330       1 server.go:661] "Version info" version="v1.24.4"
	I0127 12:19:31.941353       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:19:31.943009       1 config.go:317] "Starting service config controller"
	I0127 12:19:31.943292       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0127 12:19:31.943336       1 config.go:226] "Starting endpoint slice config controller"
	I0127 12:19:31.943341       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0127 12:19:31.944298       1 config.go:444] "Starting node config controller"
	I0127 12:19:31.944344       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0127 12:19:32.043611       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0127 12:19:32.043650       1 shared_informer.go:262] Caches are synced for service config
	I0127 12:19:32.044930       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [28fa5dbaecaaa982cd153240c71e7e8908bb315f4bbefc133c6246091203ae00] <==
	I0127 12:19:27.181323       1 serving.go:348] Generated self-signed cert in-memory
	W0127 12:19:30.434134       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0127 12:19:30.434391       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0127 12:19:30.434471       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0127 12:19:30.434509       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0127 12:19:30.524589       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0127 12:19:30.524624       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:19:30.538954       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0127 12:19:30.540772       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0127 12:19:30.540858       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 12:19:30.540962       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0127 12:19:30.640957       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 12:19:30 test-preload-126856 kubelet[1125]: I0127 12:19:30.447147    1125 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jan 27 12:19:30 test-preload-126856 kubelet[1125]: E0127 12:19:30.447693    1125 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Jan 27 12:19:30 test-preload-126856 kubelet[1125]: I0127 12:19:30.502538    1125 kubelet_node_status.go:108] "Node was previously registered" node="test-preload-126856"
	Jan 27 12:19:30 test-preload-126856 kubelet[1125]: I0127 12:19:30.502725    1125 kubelet_node_status.go:73] "Successfully registered node" node="test-preload-126856"
	Jan 27 12:19:30 test-preload-126856 kubelet[1125]: I0127 12:19:30.508980    1125 setters.go:532] "Node became not ready" node="test-preload-126856" condition={Type:Ready Status:False LastHeartbeatTime:2025-01-27 12:19:30.508881769 +0000 UTC m=+5.228437878 LastTransitionTime:2025-01-27 12:19:30.508881769 +0000 UTC m=+5.228437878 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?}
	Jan 27 12:19:30 test-preload-126856 kubelet[1125]: I0127 12:19:30.548346    1125 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldv9q\" (UniqueName: \"kubernetes.io/projected/fb6bb6cc-8a30-4da3-bfaa-8a3b869f7ecb-kube-api-access-ldv9q\") pod \"storage-provisioner\" (UID: \"fb6bb6cc-8a30-4da3-bfaa-8a3b869f7ecb\") " pod="kube-system/storage-provisioner"
	Jan 27 12:19:30 test-preload-126856 kubelet[1125]: I0127 12:19:30.548395    1125 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j65xc\" (UniqueName: \"kubernetes.io/projected/a7e85a16-30d7-4452-adb2-e151e664fd9a-kube-api-access-j65xc\") pod \"coredns-6d4b75cb6d-md8mg\" (UID: \"a7e85a16-30d7-4452-adb2-e151e664fd9a\") " pod="kube-system/coredns-6d4b75cb6d-md8mg"
	Jan 27 12:19:30 test-preload-126856 kubelet[1125]: I0127 12:19:30.548441    1125 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e63705d9-bd02-41b7-8249-ad09420f07c3-kube-proxy\") pod \"kube-proxy-vk66g\" (UID: \"e63705d9-bd02-41b7-8249-ad09420f07c3\") " pod="kube-system/kube-proxy-vk66g"
	Jan 27 12:19:30 test-preload-126856 kubelet[1125]: I0127 12:19:30.548462    1125 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e63705d9-bd02-41b7-8249-ad09420f07c3-xtables-lock\") pod \"kube-proxy-vk66g\" (UID: \"e63705d9-bd02-41b7-8249-ad09420f07c3\") " pod="kube-system/kube-proxy-vk66g"
	Jan 27 12:19:30 test-preload-126856 kubelet[1125]: I0127 12:19:30.548492    1125 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/fb6bb6cc-8a30-4da3-bfaa-8a3b869f7ecb-tmp\") pod \"storage-provisioner\" (UID: \"fb6bb6cc-8a30-4da3-bfaa-8a3b869f7ecb\") " pod="kube-system/storage-provisioner"
	Jan 27 12:19:30 test-preload-126856 kubelet[1125]: I0127 12:19:30.548531    1125 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a7e85a16-30d7-4452-adb2-e151e664fd9a-config-volume\") pod \"coredns-6d4b75cb6d-md8mg\" (UID: \"a7e85a16-30d7-4452-adb2-e151e664fd9a\") " pod="kube-system/coredns-6d4b75cb6d-md8mg"
	Jan 27 12:19:30 test-preload-126856 kubelet[1125]: I0127 12:19:30.548548    1125 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e63705d9-bd02-41b7-8249-ad09420f07c3-lib-modules\") pod \"kube-proxy-vk66g\" (UID: \"e63705d9-bd02-41b7-8249-ad09420f07c3\") " pod="kube-system/kube-proxy-vk66g"
	Jan 27 12:19:30 test-preload-126856 kubelet[1125]: I0127 12:19:30.548567    1125 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn4xs\" (UniqueName: \"kubernetes.io/projected/e63705d9-bd02-41b7-8249-ad09420f07c3-kube-api-access-sn4xs\") pod \"kube-proxy-vk66g\" (UID: \"e63705d9-bd02-41b7-8249-ad09420f07c3\") " pod="kube-system/kube-proxy-vk66g"
	Jan 27 12:19:30 test-preload-126856 kubelet[1125]: I0127 12:19:30.548582    1125 reconciler.go:159] "Reconciler: start to sync state"
	Jan 27 12:19:30 test-preload-126856 kubelet[1125]: E0127 12:19:30.567614    1125 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-test-preload-126856\" already exists" pod="kube-system/kube-controller-manager-test-preload-126856"
	Jan 27 12:19:30 test-preload-126856 kubelet[1125]: E0127 12:19:30.652844    1125 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 27 12:19:30 test-preload-126856 kubelet[1125]: E0127 12:19:30.653127    1125 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/a7e85a16-30d7-4452-adb2-e151e664fd9a-config-volume podName:a7e85a16-30d7-4452-adb2-e151e664fd9a nodeName:}" failed. No retries permitted until 2025-01-27 12:19:31.153082171 +0000 UTC m=+5.872638271 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a7e85a16-30d7-4452-adb2-e151e664fd9a-config-volume") pod "coredns-6d4b75cb6d-md8mg" (UID: "a7e85a16-30d7-4452-adb2-e151e664fd9a") : object "kube-system"/"coredns" not registered
	Jan 27 12:19:31 test-preload-126856 kubelet[1125]: E0127 12:19:31.157447    1125 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 27 12:19:31 test-preload-126856 kubelet[1125]: E0127 12:19:31.157547    1125 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/a7e85a16-30d7-4452-adb2-e151e664fd9a-config-volume podName:a7e85a16-30d7-4452-adb2-e151e664fd9a nodeName:}" failed. No retries permitted until 2025-01-27 12:19:32.157525568 +0000 UTC m=+6.877081665 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a7e85a16-30d7-4452-adb2-e151e664fd9a-config-volume") pod "coredns-6d4b75cb6d-md8mg" (UID: "a7e85a16-30d7-4452-adb2-e151e664fd9a") : object "kube-system"/"coredns" not registered
	Jan 27 12:19:32 test-preload-126856 kubelet[1125]: E0127 12:19:32.167485    1125 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 27 12:19:32 test-preload-126856 kubelet[1125]: E0127 12:19:32.167577    1125 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/a7e85a16-30d7-4452-adb2-e151e664fd9a-config-volume podName:a7e85a16-30d7-4452-adb2-e151e664fd9a nodeName:}" failed. No retries permitted until 2025-01-27 12:19:34.167560404 +0000 UTC m=+8.887116502 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a7e85a16-30d7-4452-adb2-e151e664fd9a-config-volume") pod "coredns-6d4b75cb6d-md8mg" (UID: "a7e85a16-30d7-4452-adb2-e151e664fd9a") : object "kube-system"/"coredns" not registered
	Jan 27 12:19:32 test-preload-126856 kubelet[1125]: E0127 12:19:32.491691    1125 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-md8mg" podUID=a7e85a16-30d7-4452-adb2-e151e664fd9a
	Jan 27 12:19:34 test-preload-126856 kubelet[1125]: E0127 12:19:34.184476    1125 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 27 12:19:34 test-preload-126856 kubelet[1125]: E0127 12:19:34.184603    1125 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/a7e85a16-30d7-4452-adb2-e151e664fd9a-config-volume podName:a7e85a16-30d7-4452-adb2-e151e664fd9a nodeName:}" failed. No retries permitted until 2025-01-27 12:19:38.184583193 +0000 UTC m=+12.904139303 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a7e85a16-30d7-4452-adb2-e151e664fd9a-config-volume") pod "coredns-6d4b75cb6d-md8mg" (UID: "a7e85a16-30d7-4452-adb2-e151e664fd9a") : object "kube-system"/"coredns" not registered
	Jan 27 12:19:34 test-preload-126856 kubelet[1125]: E0127 12:19:34.493129    1125 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-md8mg" podUID=a7e85a16-30d7-4452-adb2-e151e664fd9a
	
	
	==> storage-provisioner [0eabf762d01b89d32b1c73799790216b6db7d18e265cd7ab303987b39a7a04af] <==
	I0127 12:19:31.197862       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-126856 -n test-preload-126856
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-126856 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-126856" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-126856
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-126856: (1.171295945s)
--- FAIL: TestPreload (170.46s)
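
Note (debugging sketch, not part of the captured output): the repeated kubelet errors above — 'object "kube-system"/"coredns" not registered' on the config-volume mount and 'No CNI configuration file in /etc/cni/net.d/' — are consistent with the kubelet syncing the coredns pod before its ConfigMap informer and the CNI configuration are ready after the preload restart. Assuming a live reproduction of the test-preload-126856 profile (the profile shown in the log is deleted at the end of this test), commands along these lines could confirm whether the coredns ConfigMap and a CNI config were actually present at the time:

	kubectl --context test-preload-126856 -n kube-system get configmap coredns -o yaml
	out/minikube-linux-amd64 -p test-preload-126856 ssh "ls -l /etc/cni/net.d/"
	out/minikube-linux-amd64 -p test-preload-126856 ssh "sudo journalctl -u kubelet --no-pager | tail -n 50"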

                                                
                                    
TestKubernetesUpgrade (376.54s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-029294 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-029294 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m22.099277211s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-029294] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20318
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20318-1724227/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-1724227/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-029294" primary control-plane node in "kubernetes-upgrade-029294" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 12:22:23.456204 1764580 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:22:23.456480 1764580 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:22:23.456491 1764580 out.go:358] Setting ErrFile to fd 2...
	I0127 12:22:23.456496 1764580 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:22:23.456675 1764580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-1724227/.minikube/bin
	I0127 12:22:23.457241 1764580 out.go:352] Setting JSON to false
	I0127 12:22:23.458241 1764580 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":32684,"bootTime":1737947859,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 12:22:23.458342 1764580 start.go:139] virtualization: kvm guest
	I0127 12:22:23.460196 1764580 out.go:177] * [kubernetes-upgrade-029294] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 12:22:23.461509 1764580 notify.go:220] Checking for updates...
	I0127 12:22:23.461523 1764580 out.go:177]   - MINIKUBE_LOCATION=20318
	I0127 12:22:23.462699 1764580 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:22:23.463865 1764580 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20318-1724227/kubeconfig
	I0127 12:22:23.464968 1764580 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-1724227/.minikube
	I0127 12:22:23.466070 1764580 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 12:22:23.467365 1764580 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:22:23.468925 1764580 config.go:182] Loaded profile config "NoKubernetes-270668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:22:23.469043 1764580 config.go:182] Loaded profile config "offline-crio-266554": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:22:23.469135 1764580 config.go:182] Loaded profile config "running-upgrade-385378": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0127 12:22:23.469223 1764580 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:22:23.504125 1764580 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 12:22:23.505064 1764580 start.go:297] selected driver: kvm2
	I0127 12:22:23.505087 1764580 start.go:901] validating driver "kvm2" against <nil>
	I0127 12:22:23.505104 1764580 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:22:23.506120 1764580 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:22:23.506245 1764580 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20318-1724227/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 12:22:23.520945 1764580 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 12:22:23.521004 1764580 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 12:22:23.521309 1764580 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 12:22:23.521343 1764580 cni.go:84] Creating CNI manager for ""
	I0127 12:22:23.521404 1764580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 12:22:23.521415 1764580 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 12:22:23.521485 1764580 start.go:340] cluster config:
	{Name:kubernetes-upgrade-029294 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-029294 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:22:23.521616 1764580 iso.go:125] acquiring lock: {Name:mkadfcca31fda677c8c62cad1d325dd2bd0a2473 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:22:23.523637 1764580 out.go:177] * Starting "kubernetes-upgrade-029294" primary control-plane node in "kubernetes-upgrade-029294" cluster
	I0127 12:22:23.524565 1764580 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 12:22:23.524600 1764580 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0127 12:22:23.524611 1764580 cache.go:56] Caching tarball of preloaded images
	I0127 12:22:23.524690 1764580 preload.go:172] Found /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 12:22:23.524703 1764580 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0127 12:22:23.524802 1764580 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kubernetes-upgrade-029294/config.json ...
	I0127 12:22:23.524825 1764580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kubernetes-upgrade-029294/config.json: {Name:mk249ab8d8a15978272911d08b6acbecc5c5009c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:22:23.524977 1764580 start.go:360] acquireMachinesLock for kubernetes-upgrade-029294: {Name:mk206ddd5e564ea6986d65bef76be5837a8b5360 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 12:23:16.022962 1764580 start.go:364] duration metric: took 52.497947935s to acquireMachinesLock for "kubernetes-upgrade-029294"
	I0127 12:23:16.023047 1764580 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-029294 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernete
s-upgrade-029294 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 12:23:16.023194 1764580 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 12:23:16.025733 1764580 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0127 12:23:16.025957 1764580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:23:16.026021 1764580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:23:16.046461 1764580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35177
	I0127 12:23:16.047032 1764580 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:23:16.047676 1764580 main.go:141] libmachine: Using API Version  1
	I0127 12:23:16.047708 1764580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:23:16.048080 1764580 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:23:16.048282 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetMachineName
	I0127 12:23:16.048443 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .DriverName
	I0127 12:23:16.048609 1764580 start.go:159] libmachine.API.Create for "kubernetes-upgrade-029294" (driver="kvm2")
	I0127 12:23:16.048667 1764580 client.go:168] LocalClient.Create starting
	I0127 12:23:16.048703 1764580 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem
	I0127 12:23:16.048738 1764580 main.go:141] libmachine: Decoding PEM data...
	I0127 12:23:16.048763 1764580 main.go:141] libmachine: Parsing certificate...
	I0127 12:23:16.048833 1764580 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem
	I0127 12:23:16.048861 1764580 main.go:141] libmachine: Decoding PEM data...
	I0127 12:23:16.048876 1764580 main.go:141] libmachine: Parsing certificate...
	I0127 12:23:16.048909 1764580 main.go:141] libmachine: Running pre-create checks...
	I0127 12:23:16.048949 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .PreCreateCheck
	I0127 12:23:16.049302 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetConfigRaw
	I0127 12:23:16.049738 1764580 main.go:141] libmachine: Creating machine...
	I0127 12:23:16.049755 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .Create
	I0127 12:23:16.049910 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) creating KVM machine...
	I0127 12:23:16.049928 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) creating network...
	I0127 12:23:16.050814 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | found existing default KVM network
	I0127 12:23:16.051982 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | I0127 12:23:16.051812 1765355 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:72:9b:4d} reservation:<nil>}
	I0127 12:23:16.052780 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | I0127 12:23:16.052699 1765355 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000221ce0}
	I0127 12:23:16.052801 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | created network xml: 
	I0127 12:23:16.052811 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | <network>
	I0127 12:23:16.052832 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG |   <name>mk-kubernetes-upgrade-029294</name>
	I0127 12:23:16.052848 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG |   <dns enable='no'/>
	I0127 12:23:16.052858 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG |   
	I0127 12:23:16.052868 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0127 12:23:16.052872 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG |     <dhcp>
	I0127 12:23:16.052880 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0127 12:23:16.052886 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG |     </dhcp>
	I0127 12:23:16.052892 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG |   </ip>
	I0127 12:23:16.052902 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG |   
	I0127 12:23:16.052918 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | </network>
	I0127 12:23:16.052933 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | 
	I0127 12:23:16.057768 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | trying to create private KVM network mk-kubernetes-upgrade-029294 192.168.50.0/24...
	I0127 12:23:16.126431 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | private KVM network mk-kubernetes-upgrade-029294 192.168.50.0/24 created
	I0127 12:23:16.126596 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) setting up store path in /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/kubernetes-upgrade-029294 ...
	I0127 12:23:16.126622 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) building disk image from file:///home/jenkins/minikube-integration/20318-1724227/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 12:23:16.126642 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | I0127 12:23:16.126568 1765355 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20318-1724227/.minikube
	I0127 12:23:16.126739 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Downloading /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20318-1724227/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 12:23:16.394926 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | I0127 12:23:16.394781 1765355 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/kubernetes-upgrade-029294/id_rsa...
	I0127 12:23:16.667193 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | I0127 12:23:16.667075 1765355 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/kubernetes-upgrade-029294/kubernetes-upgrade-029294.rawdisk...
	I0127 12:23:16.667222 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | Writing magic tar header
	I0127 12:23:16.667291 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | Writing SSH key tar header
	I0127 12:23:16.667339 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | I0127 12:23:16.667194 1765355 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/kubernetes-upgrade-029294 ...
	I0127 12:23:16.667352 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) setting executable bit set on /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/kubernetes-upgrade-029294 (perms=drwx------)
	I0127 12:23:16.667364 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) setting executable bit set on /home/jenkins/minikube-integration/20318-1724227/.minikube/machines (perms=drwxr-xr-x)
	I0127 12:23:16.667378 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) setting executable bit set on /home/jenkins/minikube-integration/20318-1724227/.minikube (perms=drwxr-xr-x)
	I0127 12:23:16.667394 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/kubernetes-upgrade-029294
	I0127 12:23:16.667409 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) setting executable bit set on /home/jenkins/minikube-integration/20318-1724227 (perms=drwxrwxr-x)
	I0127 12:23:16.667424 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 12:23:16.667433 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines
	I0127 12:23:16.667440 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 12:23:16.667449 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) creating domain...
	I0127 12:23:16.667464 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20318-1724227/.minikube
	I0127 12:23:16.667485 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20318-1724227
	I0127 12:23:16.667495 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 12:23:16.667504 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | checking permissions on dir: /home/jenkins
	I0127 12:23:16.667517 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | checking permissions on dir: /home
	I0127 12:23:16.667527 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | skipping /home - not owner
	I0127 12:23:16.668531 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) define libvirt domain using xml: 
	I0127 12:23:16.668554 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) <domain type='kvm'>
	I0127 12:23:16.668566 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)   <name>kubernetes-upgrade-029294</name>
	I0127 12:23:16.668578 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)   <memory unit='MiB'>2200</memory>
	I0127 12:23:16.668592 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)   <vcpu>2</vcpu>
	I0127 12:23:16.668601 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)   <features>
	I0127 12:23:16.668615 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)     <acpi/>
	I0127 12:23:16.668630 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)     <apic/>
	I0127 12:23:16.668642 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)     <pae/>
	I0127 12:23:16.668651 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)     
	I0127 12:23:16.668670 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)   </features>
	I0127 12:23:16.668686 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)   <cpu mode='host-passthrough'>
	I0127 12:23:16.668693 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)   
	I0127 12:23:16.668699 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)   </cpu>
	I0127 12:23:16.668736 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)   <os>
	I0127 12:23:16.668761 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)     <type>hvm</type>
	I0127 12:23:16.668775 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)     <boot dev='cdrom'/>
	I0127 12:23:16.668786 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)     <boot dev='hd'/>
	I0127 12:23:16.668800 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)     <bootmenu enable='no'/>
	I0127 12:23:16.668811 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)   </os>
	I0127 12:23:16.668822 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)   <devices>
	I0127 12:23:16.668835 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)     <disk type='file' device='cdrom'>
	I0127 12:23:16.668880 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)       <source file='/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/kubernetes-upgrade-029294/boot2docker.iso'/>
	I0127 12:23:16.668893 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)       <target dev='hdc' bus='scsi'/>
	I0127 12:23:16.668903 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)       <readonly/>
	I0127 12:23:16.668916 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)     </disk>
	I0127 12:23:16.668932 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)     <disk type='file' device='disk'>
	I0127 12:23:16.668947 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 12:23:16.668964 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)       <source file='/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/kubernetes-upgrade-029294/kubernetes-upgrade-029294.rawdisk'/>
	I0127 12:23:16.668977 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)       <target dev='hda' bus='virtio'/>
	I0127 12:23:16.668987 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)     </disk>
	I0127 12:23:16.668998 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)     <interface type='network'>
	I0127 12:23:16.669015 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)       <source network='mk-kubernetes-upgrade-029294'/>
	I0127 12:23:16.669028 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)       <model type='virtio'/>
	I0127 12:23:16.669039 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)     </interface>
	I0127 12:23:16.669061 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)     <interface type='network'>
	I0127 12:23:16.669073 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)       <source network='default'/>
	I0127 12:23:16.669098 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)       <model type='virtio'/>
	I0127 12:23:16.669123 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)     </interface>
	I0127 12:23:16.669135 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)     <serial type='pty'>
	I0127 12:23:16.669146 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)       <target port='0'/>
	I0127 12:23:16.669156 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)     </serial>
	I0127 12:23:16.669165 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)     <console type='pty'>
	I0127 12:23:16.669178 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)       <target type='serial' port='0'/>
	I0127 12:23:16.669189 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)     </console>
	I0127 12:23:16.669213 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)     <rng model='virtio'>
	I0127 12:23:16.669228 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)       <backend model='random'>/dev/random</backend>
	I0127 12:23:16.669240 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)     </rng>
	I0127 12:23:16.669250 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)     
	I0127 12:23:16.669259 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)     
	I0127 12:23:16.669266 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294)   </devices>
	I0127 12:23:16.669275 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) </domain>
	I0127 12:23:16.669285 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) 
	I0127 12:23:16.672506 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:ee:cd:07 in network default
	I0127 12:23:16.673016 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) starting domain...
	I0127 12:23:16.673031 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) ensuring networks are active...
	I0127 12:23:16.673044 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:16.673677 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Ensuring network default is active
	I0127 12:23:16.674014 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Ensuring network mk-kubernetes-upgrade-029294 is active
	I0127 12:23:16.674525 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) getting domain XML...
	I0127 12:23:16.675227 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) creating domain...
	I0127 12:23:17.916671 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) waiting for IP...
	I0127 12:23:17.917472 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:17.917902 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | unable to find current IP address of domain kubernetes-upgrade-029294 in network mk-kubernetes-upgrade-029294
	I0127 12:23:17.917931 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | I0127 12:23:17.917874 1765355 retry.go:31] will retry after 201.695914ms: waiting for domain to come up
	I0127 12:23:18.121299 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:18.121756 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | unable to find current IP address of domain kubernetes-upgrade-029294 in network mk-kubernetes-upgrade-029294
	I0127 12:23:18.121823 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | I0127 12:23:18.121742 1765355 retry.go:31] will retry after 311.686126ms: waiting for domain to come up
	I0127 12:23:18.435449 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:18.435988 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | unable to find current IP address of domain kubernetes-upgrade-029294 in network mk-kubernetes-upgrade-029294
	I0127 12:23:18.436019 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | I0127 12:23:18.435949 1765355 retry.go:31] will retry after 395.379497ms: waiting for domain to come up
	I0127 12:23:18.833185 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:18.833680 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | unable to find current IP address of domain kubernetes-upgrade-029294 in network mk-kubernetes-upgrade-029294
	I0127 12:23:18.833713 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | I0127 12:23:18.833654 1765355 retry.go:31] will retry after 558.882484ms: waiting for domain to come up
	I0127 12:23:19.394589 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:19.395170 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | unable to find current IP address of domain kubernetes-upgrade-029294 in network mk-kubernetes-upgrade-029294
	I0127 12:23:19.395199 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | I0127 12:23:19.395121 1765355 retry.go:31] will retry after 712.125603ms: waiting for domain to come up
	I0127 12:23:20.109267 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:20.109731 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | unable to find current IP address of domain kubernetes-upgrade-029294 in network mk-kubernetes-upgrade-029294
	I0127 12:23:20.109760 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | I0127 12:23:20.109696 1765355 retry.go:31] will retry after 691.977746ms: waiting for domain to come up
	I0127 12:23:20.803796 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:20.804259 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | unable to find current IP address of domain kubernetes-upgrade-029294 in network mk-kubernetes-upgrade-029294
	I0127 12:23:20.804288 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | I0127 12:23:20.804238 1765355 retry.go:31] will retry after 955.246293ms: waiting for domain to come up
	I0127 12:23:21.761129 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:21.761644 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | unable to find current IP address of domain kubernetes-upgrade-029294 in network mk-kubernetes-upgrade-029294
	I0127 12:23:21.761676 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | I0127 12:23:21.761601 1765355 retry.go:31] will retry after 1.310531854s: waiting for domain to come up
	I0127 12:23:23.074473 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:23.074999 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | unable to find current IP address of domain kubernetes-upgrade-029294 in network mk-kubernetes-upgrade-029294
	I0127 12:23:23.075045 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | I0127 12:23:23.074963 1765355 retry.go:31] will retry after 1.271783014s: waiting for domain to come up
	I0127 12:23:24.348327 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:24.348868 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | unable to find current IP address of domain kubernetes-upgrade-029294 in network mk-kubernetes-upgrade-029294
	I0127 12:23:24.348899 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | I0127 12:23:24.348851 1765355 retry.go:31] will retry after 2.27693107s: waiting for domain to come up
	I0127 12:23:26.627985 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:26.628588 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | unable to find current IP address of domain kubernetes-upgrade-029294 in network mk-kubernetes-upgrade-029294
	I0127 12:23:26.628622 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | I0127 12:23:26.628564 1765355 retry.go:31] will retry after 1.773821s: waiting for domain to come up
	I0127 12:23:28.404550 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:28.405131 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | unable to find current IP address of domain kubernetes-upgrade-029294 in network mk-kubernetes-upgrade-029294
	I0127 12:23:28.405160 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | I0127 12:23:28.405099 1765355 retry.go:31] will retry after 3.562514651s: waiting for domain to come up
	I0127 12:23:31.968824 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:31.969206 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | unable to find current IP address of domain kubernetes-upgrade-029294 in network mk-kubernetes-upgrade-029294
	I0127 12:23:31.969246 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | I0127 12:23:31.969204 1765355 retry.go:31] will retry after 4.07489142s: waiting for domain to come up
	I0127 12:23:36.048557 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:36.049061 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | unable to find current IP address of domain kubernetes-upgrade-029294 in network mk-kubernetes-upgrade-029294
	I0127 12:23:36.049093 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | I0127 12:23:36.049044 1765355 retry.go:31] will retry after 3.564044724s: waiting for domain to come up
	I0127 12:23:39.615835 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:39.616387 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has current primary IP address 192.168.50.10 and MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:39.616422 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) found domain IP: 192.168.50.10
	I0127 12:23:39.616437 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) reserving static IP address...
	I0127 12:23:39.616827 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-029294", mac: "52:54:00:f9:5b:38", ip: "192.168.50.10"} in network mk-kubernetes-upgrade-029294
	I0127 12:23:39.690575 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | Getting to WaitForSSH function...
	I0127 12:23:39.690607 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) reserved static IP address 192.168.50.10 for domain kubernetes-upgrade-029294
	I0127 12:23:39.690620 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) waiting for SSH...
	I0127 12:23:39.693431 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:39.693872 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5b:38", ip: ""} in network mk-kubernetes-upgrade-029294: {Iface:virbr2 ExpiryTime:2025-01-27 13:23:31 +0000 UTC Type:0 Mac:52:54:00:f9:5b:38 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f9:5b:38}
	I0127 12:23:39.693921 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined IP address 192.168.50.10 and MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:39.694066 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | Using SSH client type: external
	I0127 12:23:39.694082 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | Using SSH private key: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/kubernetes-upgrade-029294/id_rsa (-rw-------)
	I0127 12:23:39.694130 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/kubernetes-upgrade-029294/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 12:23:39.694143 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | About to run SSH command:
	I0127 12:23:39.694181 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | exit 0
	I0127 12:23:39.826740 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | SSH cmd err, output: <nil>: 
	I0127 12:23:39.827042 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) KVM machine creation complete
	I0127 12:23:39.827367 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetConfigRaw
	I0127 12:23:39.827989 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .DriverName
	I0127 12:23:39.828213 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .DriverName
	I0127 12:23:39.828366 1764580 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0127 12:23:39.828381 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetState
	I0127 12:23:39.829678 1764580 main.go:141] libmachine: Detecting operating system of created instance...
	I0127 12:23:39.829706 1764580 main.go:141] libmachine: Waiting for SSH to be available...
	I0127 12:23:39.829713 1764580 main.go:141] libmachine: Getting to WaitForSSH function...
	I0127 12:23:39.829722 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHHostname
	I0127 12:23:39.832036 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:39.832428 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5b:38", ip: ""} in network mk-kubernetes-upgrade-029294: {Iface:virbr2 ExpiryTime:2025-01-27 13:23:31 +0000 UTC Type:0 Mac:52:54:00:f9:5b:38 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-029294 Clientid:01:52:54:00:f9:5b:38}
	I0127 12:23:39.832452 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined IP address 192.168.50.10 and MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:39.832578 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHPort
	I0127 12:23:39.832755 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHKeyPath
	I0127 12:23:39.832904 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHKeyPath
	I0127 12:23:39.833069 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHUsername
	I0127 12:23:39.833277 1764580 main.go:141] libmachine: Using SSH client type: native
	I0127 12:23:39.833491 1764580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I0127 12:23:39.833501 1764580 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0127 12:23:39.949970 1764580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 12:23:39.950005 1764580 main.go:141] libmachine: Detecting the provisioner...
	I0127 12:23:39.950017 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHHostname
	I0127 12:23:39.952888 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:39.953267 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5b:38", ip: ""} in network mk-kubernetes-upgrade-029294: {Iface:virbr2 ExpiryTime:2025-01-27 13:23:31 +0000 UTC Type:0 Mac:52:54:00:f9:5b:38 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-029294 Clientid:01:52:54:00:f9:5b:38}
	I0127 12:23:39.953288 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined IP address 192.168.50.10 and MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:39.953432 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHPort
	I0127 12:23:39.953655 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHKeyPath
	I0127 12:23:39.953892 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHKeyPath
	I0127 12:23:39.954077 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHUsername
	I0127 12:23:39.954248 1764580 main.go:141] libmachine: Using SSH client type: native
	I0127 12:23:39.954465 1764580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I0127 12:23:39.954477 1764580 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0127 12:23:40.062934 1764580 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0127 12:23:40.063039 1764580 main.go:141] libmachine: found compatible host: buildroot
	I0127 12:23:40.063051 1764580 main.go:141] libmachine: Provisioning with buildroot...
	I0127 12:23:40.063058 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetMachineName
	I0127 12:23:40.063297 1764580 buildroot.go:166] provisioning hostname "kubernetes-upgrade-029294"
	I0127 12:23:40.063326 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetMachineName
	I0127 12:23:40.063558 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHHostname
	I0127 12:23:40.066015 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:40.066372 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5b:38", ip: ""} in network mk-kubernetes-upgrade-029294: {Iface:virbr2 ExpiryTime:2025-01-27 13:23:31 +0000 UTC Type:0 Mac:52:54:00:f9:5b:38 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-029294 Clientid:01:52:54:00:f9:5b:38}
	I0127 12:23:40.066428 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined IP address 192.168.50.10 and MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:40.066525 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHPort
	I0127 12:23:40.066682 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHKeyPath
	I0127 12:23:40.066846 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHKeyPath
	I0127 12:23:40.066956 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHUsername
	I0127 12:23:40.067122 1764580 main.go:141] libmachine: Using SSH client type: native
	I0127 12:23:40.067307 1764580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I0127 12:23:40.067319 1764580 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-029294 && echo "kubernetes-upgrade-029294" | sudo tee /etc/hostname
	I0127 12:23:40.188295 1764580 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-029294
	
	I0127 12:23:40.188330 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHHostname
	I0127 12:23:40.191022 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:40.191419 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5b:38", ip: ""} in network mk-kubernetes-upgrade-029294: {Iface:virbr2 ExpiryTime:2025-01-27 13:23:31 +0000 UTC Type:0 Mac:52:54:00:f9:5b:38 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-029294 Clientid:01:52:54:00:f9:5b:38}
	I0127 12:23:40.191462 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined IP address 192.168.50.10 and MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:40.191617 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHPort
	I0127 12:23:40.191792 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHKeyPath
	I0127 12:23:40.191956 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHKeyPath
	I0127 12:23:40.192064 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHUsername
	I0127 12:23:40.192238 1764580 main.go:141] libmachine: Using SSH client type: native
	I0127 12:23:40.192475 1764580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I0127 12:23:40.192495 1764580 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-029294' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-029294/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-029294' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 12:23:40.314828 1764580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 12:23:40.314863 1764580 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20318-1724227/.minikube CaCertPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20318-1724227/.minikube}
	I0127 12:23:40.314915 1764580 buildroot.go:174] setting up certificates
	I0127 12:23:40.314927 1764580 provision.go:84] configureAuth start
	I0127 12:23:40.314941 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetMachineName
	I0127 12:23:40.315279 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetIP
	I0127 12:23:40.318104 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:40.318534 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5b:38", ip: ""} in network mk-kubernetes-upgrade-029294: {Iface:virbr2 ExpiryTime:2025-01-27 13:23:31 +0000 UTC Type:0 Mac:52:54:00:f9:5b:38 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-029294 Clientid:01:52:54:00:f9:5b:38}
	I0127 12:23:40.318560 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined IP address 192.168.50.10 and MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:40.318718 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHHostname
	I0127 12:23:40.321055 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:40.321361 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5b:38", ip: ""} in network mk-kubernetes-upgrade-029294: {Iface:virbr2 ExpiryTime:2025-01-27 13:23:31 +0000 UTC Type:0 Mac:52:54:00:f9:5b:38 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-029294 Clientid:01:52:54:00:f9:5b:38}
	I0127 12:23:40.321401 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined IP address 192.168.50.10 and MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:40.321500 1764580 provision.go:143] copyHostCerts
	I0127 12:23:40.321587 1764580 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-1724227/.minikube/key.pem, removing ...
	I0127 12:23:40.321617 1764580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-1724227/.minikube/key.pem
	I0127 12:23:40.321689 1764580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20318-1724227/.minikube/key.pem (1675 bytes)
	I0127 12:23:40.321857 1764580 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.pem, removing ...
	I0127 12:23:40.321876 1764580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.pem
	I0127 12:23:40.321911 1764580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.pem (1078 bytes)
	I0127 12:23:40.322047 1764580 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-1724227/.minikube/cert.pem, removing ...
	I0127 12:23:40.322059 1764580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-1724227/.minikube/cert.pem
	I0127 12:23:40.322082 1764580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20318-1724227/.minikube/cert.pem (1123 bytes)
	I0127 12:23:40.322150 1764580 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-029294 san=[127.0.0.1 192.168.50.10 kubernetes-upgrade-029294 localhost minikube]
	I0127 12:23:40.519557 1764580 provision.go:177] copyRemoteCerts
	I0127 12:23:40.519626 1764580 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 12:23:40.519653 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHHostname
	I0127 12:23:40.523002 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:40.523398 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5b:38", ip: ""} in network mk-kubernetes-upgrade-029294: {Iface:virbr2 ExpiryTime:2025-01-27 13:23:31 +0000 UTC Type:0 Mac:52:54:00:f9:5b:38 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-029294 Clientid:01:52:54:00:f9:5b:38}
	I0127 12:23:40.523430 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined IP address 192.168.50.10 and MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:40.523739 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHPort
	I0127 12:23:40.523934 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHKeyPath
	I0127 12:23:40.524124 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHUsername
	I0127 12:23:40.524282 1764580 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/kubernetes-upgrade-029294/id_rsa Username:docker}
	I0127 12:23:40.613228 1764580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 12:23:40.642710 1764580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0127 12:23:40.669216 1764580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 12:23:40.692817 1764580 provision.go:87] duration metric: took 377.875472ms to configureAuth
	I0127 12:23:40.692846 1764580 buildroot.go:189] setting minikube options for container-runtime
	I0127 12:23:40.693056 1764580 config.go:182] Loaded profile config "kubernetes-upgrade-029294": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 12:23:40.693151 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHHostname
	I0127 12:23:40.696314 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:40.696735 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5b:38", ip: ""} in network mk-kubernetes-upgrade-029294: {Iface:virbr2 ExpiryTime:2025-01-27 13:23:31 +0000 UTC Type:0 Mac:52:54:00:f9:5b:38 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-029294 Clientid:01:52:54:00:f9:5b:38}
	I0127 12:23:40.696773 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined IP address 192.168.50.10 and MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:40.696971 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHPort
	I0127 12:23:40.697176 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHKeyPath
	I0127 12:23:40.697356 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHKeyPath
	I0127 12:23:40.697521 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHUsername
	I0127 12:23:40.697726 1764580 main.go:141] libmachine: Using SSH client type: native
	I0127 12:23:40.697981 1764580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I0127 12:23:40.698003 1764580 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 12:23:40.951546 1764580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 12:23:40.951612 1764580 main.go:141] libmachine: Checking connection to Docker...
	I0127 12:23:40.951629 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetURL
	I0127 12:23:40.953043 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | using libvirt version 6000000
	I0127 12:23:40.955684 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:40.956088 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5b:38", ip: ""} in network mk-kubernetes-upgrade-029294: {Iface:virbr2 ExpiryTime:2025-01-27 13:23:31 +0000 UTC Type:0 Mac:52:54:00:f9:5b:38 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-029294 Clientid:01:52:54:00:f9:5b:38}
	I0127 12:23:40.956133 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined IP address 192.168.50.10 and MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:40.956334 1764580 main.go:141] libmachine: Docker is up and running!
	I0127 12:23:40.956347 1764580 main.go:141] libmachine: Reticulating splines...
	I0127 12:23:40.956356 1764580 client.go:171] duration metric: took 24.907676401s to LocalClient.Create
	I0127 12:23:40.956383 1764580 start.go:167] duration metric: took 24.907775242s to libmachine.API.Create "kubernetes-upgrade-029294"
	I0127 12:23:40.956399 1764580 start.go:293] postStartSetup for "kubernetes-upgrade-029294" (driver="kvm2")
	I0127 12:23:40.956417 1764580 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 12:23:40.956441 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .DriverName
	I0127 12:23:40.956702 1764580 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 12:23:40.956739 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHHostname
	I0127 12:23:40.959384 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:40.959741 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5b:38", ip: ""} in network mk-kubernetes-upgrade-029294: {Iface:virbr2 ExpiryTime:2025-01-27 13:23:31 +0000 UTC Type:0 Mac:52:54:00:f9:5b:38 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-029294 Clientid:01:52:54:00:f9:5b:38}
	I0127 12:23:40.959772 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined IP address 192.168.50.10 and MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:40.960032 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHPort
	I0127 12:23:40.960175 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHKeyPath
	I0127 12:23:40.960329 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHUsername
	I0127 12:23:40.960495 1764580 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/kubernetes-upgrade-029294/id_rsa Username:docker}
	I0127 12:23:41.049273 1764580 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 12:23:41.053478 1764580 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 12:23:41.053503 1764580 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-1724227/.minikube/addons for local assets ...
	I0127 12:23:41.053571 1764580 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-1724227/.minikube/files for local assets ...
	I0127 12:23:41.053650 1764580 filesync.go:149] local asset: /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem -> 17313962.pem in /etc/ssl/certs
	I0127 12:23:41.053733 1764580 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 12:23:41.062848 1764580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem --> /etc/ssl/certs/17313962.pem (1708 bytes)
	I0127 12:23:41.091399 1764580 start.go:296] duration metric: took 134.979304ms for postStartSetup
	I0127 12:23:41.091452 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetConfigRaw
	I0127 12:23:41.092125 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetIP
	I0127 12:23:41.094998 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:41.095382 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5b:38", ip: ""} in network mk-kubernetes-upgrade-029294: {Iface:virbr2 ExpiryTime:2025-01-27 13:23:31 +0000 UTC Type:0 Mac:52:54:00:f9:5b:38 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-029294 Clientid:01:52:54:00:f9:5b:38}
	I0127 12:23:41.095413 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined IP address 192.168.50.10 and MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:41.095660 1764580 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kubernetes-upgrade-029294/config.json ...
	I0127 12:23:41.095891 1764580 start.go:128] duration metric: took 25.072683405s to createHost
	I0127 12:23:41.095921 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHHostname
	I0127 12:23:41.098555 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:41.098940 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5b:38", ip: ""} in network mk-kubernetes-upgrade-029294: {Iface:virbr2 ExpiryTime:2025-01-27 13:23:31 +0000 UTC Type:0 Mac:52:54:00:f9:5b:38 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-029294 Clientid:01:52:54:00:f9:5b:38}
	I0127 12:23:41.098973 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined IP address 192.168.50.10 and MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:41.099221 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHPort
	I0127 12:23:41.099422 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHKeyPath
	I0127 12:23:41.099598 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHKeyPath
	I0127 12:23:41.099775 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHUsername
	I0127 12:23:41.099982 1764580 main.go:141] libmachine: Using SSH client type: native
	I0127 12:23:41.100205 1764580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I0127 12:23:41.100223 1764580 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 12:23:41.214831 1764580 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737980621.198520379
	
	I0127 12:23:41.214853 1764580 fix.go:216] guest clock: 1737980621.198520379
	I0127 12:23:41.214863 1764580 fix.go:229] Guest: 2025-01-27 12:23:41.198520379 +0000 UTC Remote: 2025-01-27 12:23:41.09590631 +0000 UTC m=+77.679301277 (delta=102.614069ms)
	I0127 12:23:41.214892 1764580 fix.go:200] guest clock delta is within tolerance: 102.614069ms
	I0127 12:23:41.214899 1764580 start.go:83] releasing machines lock for "kubernetes-upgrade-029294", held for 25.191893212s
	I0127 12:23:41.214935 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .DriverName
	I0127 12:23:41.215211 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetIP
	I0127 12:23:41.218051 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:41.218390 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5b:38", ip: ""} in network mk-kubernetes-upgrade-029294: {Iface:virbr2 ExpiryTime:2025-01-27 13:23:31 +0000 UTC Type:0 Mac:52:54:00:f9:5b:38 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-029294 Clientid:01:52:54:00:f9:5b:38}
	I0127 12:23:41.218420 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined IP address 192.168.50.10 and MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:41.218617 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .DriverName
	I0127 12:23:41.219156 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .DriverName
	I0127 12:23:41.219349 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .DriverName
	I0127 12:23:41.219448 1764580 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 12:23:41.219497 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHHostname
	I0127 12:23:41.219584 1764580 ssh_runner.go:195] Run: cat /version.json
	I0127 12:23:41.219609 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHHostname
	I0127 12:23:41.222521 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:41.222785 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:41.222845 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5b:38", ip: ""} in network mk-kubernetes-upgrade-029294: {Iface:virbr2 ExpiryTime:2025-01-27 13:23:31 +0000 UTC Type:0 Mac:52:54:00:f9:5b:38 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-029294 Clientid:01:52:54:00:f9:5b:38}
	I0127 12:23:41.222875 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined IP address 192.168.50.10 and MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:41.223155 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHPort
	I0127 12:23:41.223240 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5b:38", ip: ""} in network mk-kubernetes-upgrade-029294: {Iface:virbr2 ExpiryTime:2025-01-27 13:23:31 +0000 UTC Type:0 Mac:52:54:00:f9:5b:38 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-029294 Clientid:01:52:54:00:f9:5b:38}
	I0127 12:23:41.223267 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined IP address 192.168.50.10 and MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:41.223330 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHKeyPath
	I0127 12:23:41.223528 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHUsername
	I0127 12:23:41.223528 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHPort
	I0127 12:23:41.223732 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHKeyPath
	I0127 12:23:41.223728 1764580 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/kubernetes-upgrade-029294/id_rsa Username:docker}
	I0127 12:23:41.223902 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHUsername
	I0127 12:23:41.224036 1764580 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/kubernetes-upgrade-029294/id_rsa Username:docker}
	I0127 12:23:41.345978 1764580 ssh_runner.go:195] Run: systemctl --version
	I0127 12:23:41.352232 1764580 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 12:23:41.506226 1764580 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 12:23:41.516241 1764580 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 12:23:41.516320 1764580 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 12:23:41.532688 1764580 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 12:23:41.532718 1764580 start.go:495] detecting cgroup driver to use...
	I0127 12:23:41.532796 1764580 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 12:23:41.553049 1764580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 12:23:41.566595 1764580 docker.go:217] disabling cri-docker service (if available) ...
	I0127 12:23:41.566658 1764580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 12:23:41.579502 1764580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 12:23:41.593917 1764580 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 12:23:41.722507 1764580 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 12:23:41.923689 1764580 docker.go:233] disabling docker service ...
	I0127 12:23:41.923762 1764580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 12:23:41.941877 1764580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 12:23:41.957557 1764580 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 12:23:42.084466 1764580 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 12:23:42.216009 1764580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 12:23:42.230916 1764580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 12:23:42.249341 1764580 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0127 12:23:42.249411 1764580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:23:42.260758 1764580 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 12:23:42.260844 1764580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:23:42.271613 1764580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:23:42.282029 1764580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:23:42.293224 1764580 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 12:23:42.304525 1764580 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 12:23:42.314516 1764580 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 12:23:42.314575 1764580 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 12:23:42.327340 1764580 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 12:23:42.338410 1764580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:23:42.467622 1764580 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 12:23:42.567706 1764580 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 12:23:42.567789 1764580 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 12:23:42.573755 1764580 start.go:563] Will wait 60s for crictl version
	I0127 12:23:42.573834 1764580 ssh_runner.go:195] Run: which crictl
	I0127 12:23:42.578487 1764580 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 12:23:42.624216 1764580 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 12:23:42.624311 1764580 ssh_runner.go:195] Run: crio --version
	I0127 12:23:42.662508 1764580 ssh_runner.go:195] Run: crio --version
	I0127 12:23:42.702144 1764580 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0127 12:23:42.703215 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetIP
	I0127 12:23:42.706695 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:42.707270 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5b:38", ip: ""} in network mk-kubernetes-upgrade-029294: {Iface:virbr2 ExpiryTime:2025-01-27 13:23:31 +0000 UTC Type:0 Mac:52:54:00:f9:5b:38 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-029294 Clientid:01:52:54:00:f9:5b:38}
	I0127 12:23:42.707325 1764580 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined IP address 192.168.50.10 and MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:23:42.707545 1764580 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0127 12:23:42.712663 1764580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:23:42.730160 1764580 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-029294 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-029294 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.10 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 12:23:42.730303 1764580 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 12:23:42.730359 1764580 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 12:23:42.765993 1764580 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 12:23:42.766074 1764580 ssh_runner.go:195] Run: which lz4
	I0127 12:23:42.770199 1764580 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 12:23:42.775502 1764580 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 12:23:42.775533 1764580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0127 12:23:44.350561 1764580 crio.go:462] duration metric: took 1.580402438s to copy over tarball
	I0127 12:23:44.350657 1764580 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 12:23:46.915873 1764580 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.565150708s)
	I0127 12:23:46.915913 1764580 crio.go:469] duration metric: took 2.565314769s to extract the tarball
	I0127 12:23:46.915924 1764580 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 12:23:46.959402 1764580 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 12:23:47.002606 1764580 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 12:23:47.002631 1764580 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 12:23:47.002727 1764580 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0127 12:23:47.002759 1764580 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0127 12:23:47.002767 1764580 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 12:23:47.002780 1764580 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 12:23:47.002787 1764580 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0127 12:23:47.002733 1764580 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 12:23:47.002853 1764580 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 12:23:47.002726 1764580 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:23:47.004468 1764580 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 12:23:47.004647 1764580 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0127 12:23:47.004872 1764580 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 12:23:47.004958 1764580 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:23:47.004874 1764580 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 12:23:47.004896 1764580 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 12:23:47.004905 1764580 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0127 12:23:47.004901 1764580 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0127 12:23:47.222331 1764580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0127 12:23:47.237278 1764580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0127 12:23:47.245564 1764580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 12:23:47.247689 1764580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0127 12:23:47.249925 1764580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0127 12:23:47.265821 1764580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0127 12:23:47.276069 1764580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0127 12:23:47.285158 1764580 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0127 12:23:47.285215 1764580 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0127 12:23:47.285253 1764580 ssh_runner.go:195] Run: which crictl
	I0127 12:23:47.369575 1764580 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0127 12:23:47.369617 1764580 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0127 12:23:47.369637 1764580 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0127 12:23:47.369654 1764580 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 12:23:47.369696 1764580 ssh_runner.go:195] Run: which crictl
	I0127 12:23:47.369728 1764580 ssh_runner.go:195] Run: which crictl
	I0127 12:23:47.397818 1764580 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0127 12:23:47.397837 1764580 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0127 12:23:47.397868 1764580 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 12:23:47.397868 1764580 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 12:23:47.397916 1764580 ssh_runner.go:195] Run: which crictl
	I0127 12:23:47.397923 1764580 ssh_runner.go:195] Run: which crictl
	I0127 12:23:47.411039 1764580 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0127 12:23:47.411089 1764580 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 12:23:47.411132 1764580 ssh_runner.go:195] Run: which crictl
	I0127 12:23:47.412674 1764580 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0127 12:23:47.412714 1764580 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0127 12:23:47.412725 1764580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 12:23:47.412745 1764580 ssh_runner.go:195] Run: which crictl
	I0127 12:23:47.412772 1764580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 12:23:47.412828 1764580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 12:23:47.412836 1764580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 12:23:47.412888 1764580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 12:23:47.417291 1764580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 12:23:47.538064 1764580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 12:23:47.538141 1764580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 12:23:47.538161 1764580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 12:23:47.538281 1764580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 12:23:47.538314 1764580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 12:23:47.637525 1764580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 12:23:47.637626 1764580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 12:23:47.637584 1764580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 12:23:47.637593 1764580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 12:23:47.637604 1764580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 12:23:47.637687 1764580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 12:23:47.637690 1764580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 12:23:47.746072 1764580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 12:23:47.801439 1764580 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0127 12:23:47.801529 1764580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 12:23:47.801574 1764580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 12:23:47.801655 1764580 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0127 12:23:47.801758 1764580 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0127 12:23:47.801761 1764580 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0127 12:23:47.815800 1764580 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0127 12:23:47.854861 1764580 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0127 12:23:47.854903 1764580 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0127 12:23:48.292740 1764580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:23:48.433470 1764580 cache_images.go:92] duration metric: took 1.430820059s to LoadCachedImages
	W0127 12:23:48.433574 1764580 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0127 12:23:48.433592 1764580 kubeadm.go:934] updating node { 192.168.50.10 8443 v1.20.0 crio true true} ...
	I0127 12:23:48.433714 1764580 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-029294 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-029294 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 12:23:48.433796 1764580 ssh_runner.go:195] Run: crio config
	I0127 12:23:48.495676 1764580 cni.go:84] Creating CNI manager for ""
	I0127 12:23:48.495704 1764580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 12:23:48.495716 1764580 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 12:23:48.495741 1764580 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.10 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-029294 NodeName:kubernetes-upgrade-029294 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0127 12:23:48.495932 1764580 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-029294"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.10"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 12:23:48.496015 1764580 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0127 12:23:48.506154 1764580 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 12:23:48.506226 1764580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 12:23:48.516118 1764580 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0127 12:23:48.532560 1764580 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 12:23:48.548260 1764580 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0127 12:23:48.564256 1764580 ssh_runner.go:195] Run: grep 192.168.50.10	control-plane.minikube.internal$ /etc/hosts
	I0127 12:23:48.568118 1764580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.10	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:23:48.579639 1764580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:23:48.711788 1764580 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:23:48.729980 1764580 certs.go:68] Setting up /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kubernetes-upgrade-029294 for IP: 192.168.50.10
	I0127 12:23:48.730005 1764580 certs.go:194] generating shared ca certs ...
	I0127 12:23:48.730028 1764580 certs.go:226] acquiring lock for ca certs: {Name:mk9e5c59e9dbe250ccc1601895509e2d4f3690e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:23:48.730262 1764580 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.key
	I0127 12:23:48.730335 1764580 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.key
	I0127 12:23:48.730350 1764580 certs.go:256] generating profile certs ...
	I0127 12:23:48.730411 1764580 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kubernetes-upgrade-029294/client.key
	I0127 12:23:48.730425 1764580 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kubernetes-upgrade-029294/client.crt with IP's: []
	I0127 12:23:49.020643 1764580 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kubernetes-upgrade-029294/client.crt ...
	I0127 12:23:49.020712 1764580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kubernetes-upgrade-029294/client.crt: {Name:mk0b5513ea435031d2058c50748cefeb5f5eeacb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:23:49.020945 1764580 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kubernetes-upgrade-029294/client.key ...
	I0127 12:23:49.020970 1764580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kubernetes-upgrade-029294/client.key: {Name:mk51cf466991dafc93cb1cb048a428298c5dd802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:23:49.021093 1764580 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kubernetes-upgrade-029294/apiserver.key.bf32c52a
	I0127 12:23:49.021118 1764580 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kubernetes-upgrade-029294/apiserver.crt.bf32c52a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.10]
	I0127 12:23:49.088181 1764580 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kubernetes-upgrade-029294/apiserver.crt.bf32c52a ...
	I0127 12:23:49.088214 1764580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kubernetes-upgrade-029294/apiserver.crt.bf32c52a: {Name:mk430d74b368df16b5c70188a3a5ce04a56a12fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:23:49.088367 1764580 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kubernetes-upgrade-029294/apiserver.key.bf32c52a ...
	I0127 12:23:49.088381 1764580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kubernetes-upgrade-029294/apiserver.key.bf32c52a: {Name:mkd120915a721795809c787f52e5f8399c6c0221 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:23:49.088453 1764580 certs.go:381] copying /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kubernetes-upgrade-029294/apiserver.crt.bf32c52a -> /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kubernetes-upgrade-029294/apiserver.crt
	I0127 12:23:49.088541 1764580 certs.go:385] copying /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kubernetes-upgrade-029294/apiserver.key.bf32c52a -> /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kubernetes-upgrade-029294/apiserver.key
	I0127 12:23:49.088600 1764580 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kubernetes-upgrade-029294/proxy-client.key
	I0127 12:23:49.088619 1764580 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kubernetes-upgrade-029294/proxy-client.crt with IP's: []
	I0127 12:23:49.164550 1764580 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kubernetes-upgrade-029294/proxy-client.crt ...
	I0127 12:23:49.164586 1764580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kubernetes-upgrade-029294/proxy-client.crt: {Name:mk8fc31988c94751393c482eb541265565110b8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:23:49.164742 1764580 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kubernetes-upgrade-029294/proxy-client.key ...
	I0127 12:23:49.164755 1764580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kubernetes-upgrade-029294/proxy-client.key: {Name:mk2789090ca93ffe73af338ebed5619304026ba5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:23:49.164980 1764580 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/1731396.pem (1338 bytes)
	W0127 12:23:49.165026 1764580 certs.go:480] ignoring /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/1731396_empty.pem, impossibly tiny 0 bytes
	I0127 12:23:49.165038 1764580 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 12:23:49.165063 1764580 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem (1078 bytes)
	I0127 12:23:49.165102 1764580 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem (1123 bytes)
	I0127 12:23:49.165131 1764580 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/key.pem (1675 bytes)
	I0127 12:23:49.165168 1764580 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem (1708 bytes)
	I0127 12:23:49.165777 1764580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 12:23:49.191971 1764580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 12:23:49.215439 1764580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 12:23:49.241451 1764580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 12:23:49.267845 1764580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kubernetes-upgrade-029294/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0127 12:23:49.295300 1764580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kubernetes-upgrade-029294/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 12:23:49.317950 1764580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kubernetes-upgrade-029294/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 12:23:49.342109 1764580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kubernetes-upgrade-029294/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 12:23:49.364845 1764580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 12:23:49.389414 1764580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/1731396.pem --> /usr/share/ca-certificates/1731396.pem (1338 bytes)
	I0127 12:23:49.412302 1764580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem --> /usr/share/ca-certificates/17313962.pem (1708 bytes)
	I0127 12:23:49.434512 1764580 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 12:23:49.453243 1764580 ssh_runner.go:195] Run: openssl version
	I0127 12:23:49.459909 1764580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17313962.pem && ln -fs /usr/share/ca-certificates/17313962.pem /etc/ssl/certs/17313962.pem"
	I0127 12:23:49.469616 1764580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17313962.pem
	I0127 12:23:49.473853 1764580 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 11:32 /usr/share/ca-certificates/17313962.pem
	I0127 12:23:49.473914 1764580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17313962.pem
	I0127 12:23:49.479356 1764580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17313962.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 12:23:49.488990 1764580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 12:23:49.499515 1764580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:23:49.503614 1764580 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 11:24 /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:23:49.503672 1764580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:23:49.509044 1764580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 12:23:49.519358 1764580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1731396.pem && ln -fs /usr/share/ca-certificates/1731396.pem /etc/ssl/certs/1731396.pem"
	I0127 12:23:49.529407 1764580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1731396.pem
	I0127 12:23:49.533641 1764580 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 11:32 /usr/share/ca-certificates/1731396.pem
	I0127 12:23:49.533700 1764580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1731396.pem
	I0127 12:23:49.538992 1764580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1731396.pem /etc/ssl/certs/51391683.0"
	I0127 12:23:49.548547 1764580 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 12:23:49.552619 1764580 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 12:23:49.552679 1764580 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-029294 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-029294 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.10 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:23:49.552764 1764580 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 12:23:49.552811 1764580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 12:23:49.593902 1764580 cri.go:89] found id: ""
	I0127 12:23:49.593999 1764580 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 12:23:49.609798 1764580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 12:23:49.624951 1764580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 12:23:49.640209 1764580 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 12:23:49.640234 1764580 kubeadm.go:157] found existing configuration files:
	
	I0127 12:23:49.640295 1764580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 12:23:49.657006 1764580 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 12:23:49.657082 1764580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 12:23:49.666741 1764580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 12:23:49.680848 1764580 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 12:23:49.680916 1764580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 12:23:49.696891 1764580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 12:23:49.706959 1764580 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 12:23:49.707040 1764580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 12:23:49.717640 1764580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 12:23:49.726548 1764580 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 12:23:49.726614 1764580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
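
The grep/rm pairs above are the stale-kubeconfig cleanup: any file under /etc/kubernetes that does not mention https://control-plane.minikube.internal:8443 is removed before kubeadm init runs. The sketch below mirrors that check; file paths and the endpoint are taken from the log, and the report-only behaviour (instead of 'sudo rm -f') is an assumption added so the sketch deletes nothing.

// staleconf.go - illustrates the check-then-remove pattern in the log above.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			// The log runs "sudo rm -f <file>" at this point; this sketch only reports it.
			fmt.Println("would remove stale config:", f)
			continue
		}
		fmt.Println("keeping:", f)
	}
}
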
	I0127 12:23:49.735527 1764580 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 12:23:49.852833 1764580 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 12:23:49.852936 1764580 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 12:23:50.003500 1764580 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 12:23:50.003638 1764580 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 12:23:50.003762 1764580 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 12:23:50.191931 1764580 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 12:23:50.435963 1764580 out.go:235]   - Generating certificates and keys ...
	I0127 12:23:50.436131 1764580 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 12:23:50.436235 1764580 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 12:23:50.436331 1764580 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 12:23:50.640193 1764580 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 12:23:50.814054 1764580 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 12:23:51.094284 1764580 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 12:23:51.329440 1764580 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 12:23:51.329679 1764580 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-029294 localhost] and IPs [192.168.50.10 127.0.0.1 ::1]
	I0127 12:23:51.611139 1764580 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 12:23:51.611382 1764580 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-029294 localhost] and IPs [192.168.50.10 127.0.0.1 ::1]
	I0127 12:23:51.793498 1764580 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 12:23:51.952031 1764580 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 12:23:51.996289 1764580 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 12:23:51.996605 1764580 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 12:23:52.119853 1764580 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 12:23:52.183382 1764580 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 12:23:52.281535 1764580 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 12:23:52.858931 1764580 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 12:23:52.876091 1764580 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 12:23:52.876339 1764580 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 12:23:52.876537 1764580 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 12:23:53.043672 1764580 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 12:23:53.044915 1764580 out.go:235]   - Booting up control plane ...
	I0127 12:23:53.045120 1764580 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 12:23:53.053685 1764580 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 12:23:53.055339 1764580 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 12:23:53.056363 1764580 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 12:23:53.061245 1764580 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 12:24:33.059092 1764580 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 12:24:33.059290 1764580 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 12:24:33.059549 1764580 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 12:24:38.059815 1764580 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 12:24:38.060022 1764580 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 12:24:48.060633 1764580 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 12:24:48.060954 1764580 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 12:25:08.061984 1764580 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 12:25:08.062298 1764580 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 12:25:48.062339 1764580 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 12:25:48.062616 1764580 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 12:25:48.062647 1764580 kubeadm.go:310] 
	I0127 12:25:48.062710 1764580 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 12:25:48.062784 1764580 kubeadm.go:310] 		timed out waiting for the condition
	I0127 12:25:48.062795 1764580 kubeadm.go:310] 
	I0127 12:25:48.062861 1764580 kubeadm.go:310] 	This error is likely caused by:
	I0127 12:25:48.062902 1764580 kubeadm.go:310] 		- The kubelet is not running
	I0127 12:25:48.062991 1764580 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 12:25:48.063004 1764580 kubeadm.go:310] 
	I0127 12:25:48.063097 1764580 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 12:25:48.063128 1764580 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 12:25:48.063157 1764580 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 12:25:48.063164 1764580 kubeadm.go:310] 
	I0127 12:25:48.063261 1764580 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 12:25:48.063330 1764580 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 12:25:48.063336 1764580 kubeadm.go:310] 
	I0127 12:25:48.063433 1764580 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 12:25:48.063513 1764580 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 12:25:48.063584 1764580 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 12:25:48.063649 1764580 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 12:25:48.063657 1764580 kubeadm.go:310] 
	I0127 12:25:48.064596 1764580 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 12:25:48.064688 1764580 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 12:25:48.064779 1764580 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
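
The repeated [kubelet-check] failures above are a plain HTTP GET against the kubelet's healthz endpoint on port 10248; "connection refused" means nothing is listening there, so kubeadm never sees the static control-plane pods come up. A minimal reproduction of that probe follows (port and path come straight from the log; the 5-second timeout is an assumption).

// healthzprobe.go - reproduces the health check that kubeadm retries above.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		// "connection refused" here matches the [kubelet-check] lines above:
		// the kubelet is not serving on 10248, so the retry loop times out.
		fmt.Println("kubelet healthz probe failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("kubelet healthz: %s %s\n", resp.Status, string(body))
}
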
	W0127 12:25:48.064914 1764580 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-029294 localhost] and IPs [192.168.50.10 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-029294 localhost] and IPs [192.168.50.10 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-029294 localhost] and IPs [192.168.50.10 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-029294 localhost] and IPs [192.168.50.10 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
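
The error text above repeatedly suggests listing the Kubernetes containers through the CRI-O socket. The sketch below runs that suggestion end to end, equivalent to 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'; it assumes crictl is installed and sudo is available, and it is not part of the test harness.

// listkube.go - runs the crictl troubleshooting command recommended above
// and filters out pause containers (crictl, sudo and the CRI-O socket path
// from the log are assumptions).
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl",
		"--runtime-endpoint", "/var/run/crio/crio.sock", "ps", "-a").CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "crictl failed: %v\n%s", err, out)
		os.Exit(1)
	}
	for _, line := range strings.Split(string(out), "\n") {
		if strings.Contains(line, "kube") && !strings.Contains(line, "pause") {
			fmt.Println(line)
		}
	}
}
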
	
	I0127 12:25:48.064965 1764580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 12:25:48.559615 1764580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:25:48.572933 1764580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 12:25:48.582150 1764580 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 12:25:48.582173 1764580 kubeadm.go:157] found existing configuration files:
	
	I0127 12:25:48.582229 1764580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 12:25:48.590852 1764580 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 12:25:48.590923 1764580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 12:25:48.599469 1764580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 12:25:48.607513 1764580 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 12:25:48.607571 1764580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 12:25:48.615852 1764580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 12:25:48.623715 1764580 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 12:25:48.623764 1764580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 12:25:48.631928 1764580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 12:25:48.639900 1764580 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 12:25:48.639946 1764580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 12:25:48.648300 1764580 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 12:25:48.710845 1764580 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 12:25:48.710933 1764580 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 12:25:48.843357 1764580 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 12:25:48.843507 1764580 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 12:25:48.843641 1764580 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 12:25:49.005411 1764580 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 12:25:49.007140 1764580 out.go:235]   - Generating certificates and keys ...
	I0127 12:25:49.007220 1764580 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 12:25:49.007309 1764580 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 12:25:49.007420 1764580 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 12:25:49.007511 1764580 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 12:25:49.007608 1764580 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 12:25:49.007696 1764580 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 12:25:49.007781 1764580 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 12:25:49.007980 1764580 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 12:25:49.008442 1764580 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 12:25:49.008697 1764580 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 12:25:49.008859 1764580 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 12:25:49.008940 1764580 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 12:25:49.178654 1764580 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 12:25:49.270478 1764580 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 12:25:49.375193 1764580 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 12:25:49.680251 1764580 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 12:25:49.693211 1764580 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 12:25:49.694123 1764580 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 12:25:49.694181 1764580 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 12:25:49.826932 1764580 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 12:25:49.828747 1764580 out.go:235]   - Booting up control plane ...
	I0127 12:25:49.828869 1764580 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 12:25:49.840403 1764580 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 12:25:49.841543 1764580 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 12:25:49.842529 1764580 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 12:25:49.846826 1764580 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 12:26:29.848774 1764580 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 12:26:29.849051 1764580 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 12:26:29.849277 1764580 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 12:26:34.849728 1764580 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 12:26:34.850021 1764580 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 12:26:44.850387 1764580 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 12:26:44.850616 1764580 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 12:27:04.851691 1764580 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 12:27:04.851980 1764580 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 12:27:44.852077 1764580 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 12:27:44.852386 1764580 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 12:27:44.852413 1764580 kubeadm.go:310] 
	I0127 12:27:44.852480 1764580 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 12:27:44.852545 1764580 kubeadm.go:310] 		timed out waiting for the condition
	I0127 12:27:44.852556 1764580 kubeadm.go:310] 
	I0127 12:27:44.852608 1764580 kubeadm.go:310] 	This error is likely caused by:
	I0127 12:27:44.852653 1764580 kubeadm.go:310] 		- The kubelet is not running
	I0127 12:27:44.852804 1764580 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 12:27:44.852812 1764580 kubeadm.go:310] 
	I0127 12:27:44.852933 1764580 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 12:27:44.852982 1764580 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 12:27:44.853025 1764580 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 12:27:44.853033 1764580 kubeadm.go:310] 
	I0127 12:27:44.853180 1764580 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 12:27:44.853305 1764580 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 12:27:44.853315 1764580 kubeadm.go:310] 
	I0127 12:27:44.853509 1764580 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 12:27:44.853634 1764580 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 12:27:44.853735 1764580 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 12:27:44.853860 1764580 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 12:27:44.853875 1764580 kubeadm.go:310] 
	I0127 12:27:44.854353 1764580 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 12:27:44.854470 1764580 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 12:27:44.854577 1764580 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 12:27:44.854650 1764580 kubeadm.go:394] duration metric: took 3m55.301975096s to StartCluster
	I0127 12:27:44.854696 1764580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:27:44.854773 1764580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:27:44.906081 1764580 cri.go:89] found id: ""
	I0127 12:27:44.906117 1764580 logs.go:282] 0 containers: []
	W0127 12:27:44.906129 1764580 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:27:44.906137 1764580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:27:44.906216 1764580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:27:44.949300 1764580 cri.go:89] found id: ""
	I0127 12:27:44.949331 1764580 logs.go:282] 0 containers: []
	W0127 12:27:44.949342 1764580 logs.go:284] No container was found matching "etcd"
	I0127 12:27:44.949350 1764580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:27:44.949430 1764580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:27:44.992862 1764580 cri.go:89] found id: ""
	I0127 12:27:44.992896 1764580 logs.go:282] 0 containers: []
	W0127 12:27:44.992907 1764580 logs.go:284] No container was found matching "coredns"
	I0127 12:27:44.992916 1764580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:27:44.992980 1764580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:27:45.026625 1764580 cri.go:89] found id: ""
	I0127 12:27:45.026657 1764580 logs.go:282] 0 containers: []
	W0127 12:27:45.026665 1764580 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:27:45.026671 1764580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:27:45.026735 1764580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:27:45.061300 1764580 cri.go:89] found id: ""
	I0127 12:27:45.061329 1764580 logs.go:282] 0 containers: []
	W0127 12:27:45.061338 1764580 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:27:45.061344 1764580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:27:45.061410 1764580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:27:45.093406 1764580 cri.go:89] found id: ""
	I0127 12:27:45.093442 1764580 logs.go:282] 0 containers: []
	W0127 12:27:45.093454 1764580 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:27:45.093463 1764580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:27:45.093535 1764580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:27:45.127434 1764580 cri.go:89] found id: ""
	I0127 12:27:45.127468 1764580 logs.go:282] 0 containers: []
	W0127 12:27:45.127480 1764580 logs.go:284] No container was found matching "kindnet"
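
Each found id: "" / "0 containers" pair above is one crictl query for a single control-plane component ("--quiet" prints container IDs only), and every query comes back empty because the kubelet never started the static pods. A small sketch of the same scan follows; component names are copied from the log, and crictl on PATH plus sudo are assumptions.

// findcomponents.go - mirrors the per-component container scan above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet"}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}
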
	I0127 12:27:45.127496 1764580 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:27:45.127513 1764580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:27:45.248375 1764580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:27:45.248404 1764580 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:27:45.248420 1764580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:27:45.385156 1764580 logs.go:123] Gathering logs for container status ...
	I0127 12:27:45.385201 1764580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:27:45.431163 1764580 logs.go:123] Gathering logs for kubelet ...
	I0127 12:27:45.431200 1764580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:27:45.483829 1764580 logs.go:123] Gathering logs for dmesg ...
	I0127 12:27:45.483875 1764580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
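
The "Gathering logs" runs above collect the kubelet and CRI-O journals, container status and dmesg before the start attempt is abandoned. The sketch below bundles the same commands into one diagnostic pass; every command string is copied from the log, and systemd, crictl and sudo access are assumptions.

// gatherlogs.go - one-shot collection of the diagnostics gathered above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	steps := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
	}
	for _, s := range steps {
		// Run each command through bash, exactly as the log does, and dump its output.
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		fmt.Printf("==> %s <==\nerr=%v\n%s\n", s.name, err, out)
	}
}
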
	W0127 12:27:45.499540 1764580 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0127 12:27:45.499631 1764580 out.go:270] * 
	W0127 12:27:45.499693 1764580 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 12:27:45.499711 1764580 out.go:270] * 
	W0127 12:27:45.500517 1764580 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 12:27:45.503656 1764580 out.go:201] 
	W0127 12:27:45.504774 1764580 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 12:27:45.504815 1764580 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0127 12:27:45.504840 1764580 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0127 12:27:45.505983 1764580 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-029294 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
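The stderr captured above ends with minikube's own hint that this is likely a kubelet cgroup-driver mismatch when bootstrapping Kubernetes v1.20.0 on cri-o (see the "Suggestion" and issue #4172 lines). A minimal sketch of the retry that hint proposes, reusing the flags from the failed start command plus the suggested --extra-config value; it was not executed as part of this run, so it is unverified here:

	# retry the oldest-k8s start with the kubelet cgroup driver forced to systemd, per the logged suggestion
	out/minikube-linux-amd64 start -p kubernetes-upgrade-029294 --memory=2200 \
	  --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 \
	  --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd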
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-029294
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-029294: (2.296960892s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-029294 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-029294 status --format={{.Host}}: exit status 7 (76.297903ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-029294 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-029294 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (36.262743625s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-029294 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-029294 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-029294 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (83.910807ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-029294] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20318
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20318-1724227/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-1724227/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-029294
	    minikube start -p kubernetes-upgrade-029294 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0292942 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.1, by running:
	    
	    minikube start -p kubernetes-upgrade-029294 --kubernetes-version=v1.32.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-029294 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-029294 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (12.775127488s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-01-27 12:28:37.119055115 +0000 UTC m=+3891.899828653
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-029294 -n kubernetes-upgrade-029294
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-029294 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-029294 logs -n 25: (1.188268541s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                    Args                    |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-956477 sudo cat                  | cilium-956477             | jenkins | v1.35.0 | 27 Jan 25 12:25 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-956477 sudo                      | cilium-956477             | jenkins | v1.35.0 | 27 Jan 25 12:25 UTC |                     |
	|         | cri-dockerd --version                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-956477 sudo                      | cilium-956477             | jenkins | v1.35.0 | 27 Jan 25 12:25 UTC |                     |
	|         | systemctl status containerd                |                           |         |         |                     |                     |
	|         | --all --full --no-pager                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-956477 sudo                      | cilium-956477             | jenkins | v1.35.0 | 27 Jan 25 12:25 UTC |                     |
	|         | systemctl cat containerd                   |                           |         |         |                     |                     |
	|         | --no-pager                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-956477 sudo cat                  | cilium-956477             | jenkins | v1.35.0 | 27 Jan 25 12:25 UTC |                     |
	|         | /lib/systemd/system/containerd.service     |                           |         |         |                     |                     |
	| ssh     | -p cilium-956477 sudo cat                  | cilium-956477             | jenkins | v1.35.0 | 27 Jan 25 12:25 UTC |                     |
	|         | /etc/containerd/config.toml                |                           |         |         |                     |                     |
	| ssh     | -p cilium-956477 sudo                      | cilium-956477             | jenkins | v1.35.0 | 27 Jan 25 12:25 UTC |                     |
	|         | containerd config dump                     |                           |         |         |                     |                     |
	| ssh     | -p cilium-956477 sudo                      | cilium-956477             | jenkins | v1.35.0 | 27 Jan 25 12:25 UTC |                     |
	|         | systemctl status crio --all                |                           |         |         |                     |                     |
	|         | --full --no-pager                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-956477 sudo                      | cilium-956477             | jenkins | v1.35.0 | 27 Jan 25 12:25 UTC |                     |
	|         | systemctl cat crio --no-pager              |                           |         |         |                     |                     |
	| ssh     | -p cilium-956477 sudo find                 | cilium-956477             | jenkins | v1.35.0 | 27 Jan 25 12:25 UTC |                     |
	|         | /etc/crio -type f -exec sh -c              |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                       |                           |         |         |                     |                     |
	| ssh     | -p cilium-956477 sudo crio                 | cilium-956477             | jenkins | v1.35.0 | 27 Jan 25 12:25 UTC |                     |
	|         | config                                     |                           |         |         |                     |                     |
	| delete  | -p cilium-956477                           | cilium-956477             | jenkins | v1.35.0 | 27 Jan 25 12:25 UTC | 27 Jan 25 12:25 UTC |
	| start   | -p cert-options-324519                     | cert-options-324519       | jenkins | v1.35.0 | 27 Jan 25 12:25 UTC | 27 Jan 25 12:27 UTC |
	|         | --memory=2048                              |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                  |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15              |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com           |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                      |                           |         |         |                     |                     |
	|         | --driver=kvm2                              |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-980891 ssh cat          | force-systemd-flag-980891 | jenkins | v1.35.0 | 27 Jan 25 12:26 UTC | 27 Jan 25 12:26 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf         |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-980891               | force-systemd-flag-980891 | jenkins | v1.35.0 | 27 Jan 25 12:26 UTC | 27 Jan 25 12:26 UTC |
	| start   | -p pause-502641 --memory=2048              | pause-502641              | jenkins | v1.35.0 | 27 Jan 25 12:26 UTC | 27 Jan 25 12:28 UTC |
	|         | --install-addons=false                     |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| ssh     | cert-options-324519 ssh                    | cert-options-324519       | jenkins | v1.35.0 | 27 Jan 25 12:27 UTC | 27 Jan 25 12:27 UTC |
	|         | openssl x509 -text -noout -in              |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt      |                           |         |         |                     |                     |
	| ssh     | -p cert-options-324519 -- sudo             | cert-options-324519       | jenkins | v1.35.0 | 27 Jan 25 12:27 UTC | 27 Jan 25 12:27 UTC |
	|         | cat /etc/kubernetes/admin.conf             |                           |         |         |                     |                     |
	| delete  | -p cert-options-324519                     | cert-options-324519       | jenkins | v1.35.0 | 27 Jan 25 12:27 UTC | 27 Jan 25 12:27 UTC |
	| start   | -p old-k8s-version-488586                  | old-k8s-version-488586    | jenkins | v1.35.0 | 27 Jan 25 12:27 UTC |                     |
	|         | --memory=2200                              |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true              |                           |         |         |                     |                     |
	|         | --kvm-network=default                      |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system              |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                    |                           |         |         |                     |                     |
	|         | --keep-context=false                       |                           |         |         |                     |                     |
	|         | --driver=kvm2                              |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0               |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-029294               | kubernetes-upgrade-029294 | jenkins | v1.35.0 | 27 Jan 25 12:27 UTC | 27 Jan 25 12:27 UTC |
	| start   | -p kubernetes-upgrade-029294               | kubernetes-upgrade-029294 | jenkins | v1.35.0 | 27 Jan 25 12:27 UTC | 27 Jan 25 12:28 UTC |
	|         | --memory=2200                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1               |                           |         |         |                     |                     |
	|         | --alsologtostderr                          |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| start   | -p pause-502641                            | pause-502641              | jenkins | v1.35.0 | 27 Jan 25 12:28 UTC |                     |
	|         | --alsologtostderr                          |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-029294               | kubernetes-upgrade-029294 | jenkins | v1.35.0 | 27 Jan 25 12:28 UTC |                     |
	|         | --memory=2200                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0               |                           |         |         |                     |                     |
	|         | --driver=kvm2                              |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-029294               | kubernetes-upgrade-029294 | jenkins | v1.35.0 | 27 Jan 25 12:28 UTC | 27 Jan 25 12:28 UTC |
	|         | --memory=2200                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1               |                           |         |         |                     |                     |
	|         | --alsologtostderr                          |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	|---------|--------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 12:28:24
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 12:28:24.389657 1771790 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:28:24.389921 1771790 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:28:24.389932 1771790 out.go:358] Setting ErrFile to fd 2...
	I0127 12:28:24.389936 1771790 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:28:24.390120 1771790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-1724227/.minikube/bin
	I0127 12:28:24.390685 1771790 out.go:352] Setting JSON to false
	I0127 12:28:24.391788 1771790 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":33045,"bootTime":1737947859,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 12:28:24.391898 1771790 start.go:139] virtualization: kvm guest
	I0127 12:28:24.393453 1771790 out.go:177] * [kubernetes-upgrade-029294] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 12:28:24.394795 1771790 out.go:177]   - MINIKUBE_LOCATION=20318
	I0127 12:28:24.394801 1771790 notify.go:220] Checking for updates...
	I0127 12:28:24.396768 1771790 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:28:24.397838 1771790 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20318-1724227/kubeconfig
	I0127 12:28:24.398994 1771790 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-1724227/.minikube
	I0127 12:28:24.400097 1771790 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 12:28:24.401134 1771790 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:28:24.402362 1771790 config.go:182] Loaded profile config "kubernetes-upgrade-029294": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:28:24.402705 1771790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:28:24.402819 1771790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:28:24.418611 1771790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34333
	I0127 12:28:24.419071 1771790 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:28:24.419736 1771790 main.go:141] libmachine: Using API Version  1
	I0127 12:28:24.419762 1771790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:28:24.420198 1771790 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:28:24.420426 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .DriverName
	I0127 12:28:24.420687 1771790 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:28:24.421002 1771790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:28:24.421038 1771790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:28:24.435479 1771790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36855
	I0127 12:28:24.435923 1771790 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:28:24.436475 1771790 main.go:141] libmachine: Using API Version  1
	I0127 12:28:24.436505 1771790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:28:24.436869 1771790 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:28:24.437109 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .DriverName
	I0127 12:28:24.469161 1771790 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 12:28:24.470416 1771790 start.go:297] selected driver: kvm2
	I0127 12:28:24.470430 1771790 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-029294 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:kubernetes-upgrade-029294 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.10 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:28:24.470554 1771790 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:28:24.471285 1771790 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:28:24.471371 1771790 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20318-1724227/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 12:28:24.486332 1771790 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 12:28:24.486769 1771790 cni.go:84] Creating CNI manager for ""
	I0127 12:28:24.486844 1771790 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 12:28:24.486902 1771790 start.go:340] cluster config:
	{Name:kubernetes-upgrade-029294 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:kubernetes-upgrade-029294 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.10 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:28:24.487038 1771790 iso.go:125] acquiring lock: {Name:mkadfcca31fda677c8c62cad1d325dd2bd0a2473 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:28:24.488499 1771790 out.go:177] * Starting "kubernetes-upgrade-029294" primary control-plane node in "kubernetes-upgrade-029294" cluster
	I0127 12:28:24.489436 1771790 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 12:28:24.489478 1771790 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 12:28:24.489490 1771790 cache.go:56] Caching tarball of preloaded images
	I0127 12:28:24.489574 1771790 preload.go:172] Found /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 12:28:24.489584 1771790 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 12:28:24.489712 1771790 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kubernetes-upgrade-029294/config.json ...
	I0127 12:28:24.489924 1771790 start.go:360] acquireMachinesLock for kubernetes-upgrade-029294: {Name:mk206ddd5e564ea6986d65bef76be5837a8b5360 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 12:28:24.489969 1771790 start.go:364] duration metric: took 26.889µs to acquireMachinesLock for "kubernetes-upgrade-029294"
	I0127 12:28:24.489984 1771790 start.go:96] Skipping create...Using existing machine configuration
	I0127 12:28:24.489990 1771790 fix.go:54] fixHost starting: 
	I0127 12:28:24.490253 1771790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:28:24.490281 1771790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:28:24.504353 1771790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44161
	I0127 12:28:24.504763 1771790 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:28:24.505288 1771790 main.go:141] libmachine: Using API Version  1
	I0127 12:28:24.505310 1771790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:28:24.505619 1771790 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:28:24.505834 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .DriverName
	I0127 12:28:24.505985 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetState
	I0127 12:28:24.507444 1771790 fix.go:112] recreateIfNeeded on kubernetes-upgrade-029294: state=Running err=<nil>
	W0127 12:28:24.507462 1771790 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 12:28:24.508869 1771790 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-029294" VM ...
	I0127 12:28:24.509838 1771790 machine.go:93] provisionDockerMachine start ...
	I0127 12:28:24.509859 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .DriverName
	I0127 12:28:24.510054 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHHostname
	I0127 12:28:24.512221 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:28:24.512530 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5b:38", ip: ""} in network mk-kubernetes-upgrade-029294: {Iface:virbr2 ExpiryTime:2025-01-27 13:27:59 +0000 UTC Type:0 Mac:52:54:00:f9:5b:38 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-029294 Clientid:01:52:54:00:f9:5b:38}
	I0127 12:28:24.512565 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined IP address 192.168.50.10 and MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:28:24.512744 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHPort
	I0127 12:28:24.512896 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHKeyPath
	I0127 12:28:24.513045 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHKeyPath
	I0127 12:28:24.513208 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHUsername
	I0127 12:28:24.513384 1771790 main.go:141] libmachine: Using SSH client type: native
	I0127 12:28:24.513616 1771790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I0127 12:28:24.513629 1771790 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 12:28:24.615346 1771790 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-029294
	
	I0127 12:28:24.615380 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetMachineName
	I0127 12:28:24.615640 1771790 buildroot.go:166] provisioning hostname "kubernetes-upgrade-029294"
	I0127 12:28:24.615671 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetMachineName
	I0127 12:28:24.615845 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHHostname
	I0127 12:28:24.618211 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:28:24.618633 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5b:38", ip: ""} in network mk-kubernetes-upgrade-029294: {Iface:virbr2 ExpiryTime:2025-01-27 13:27:59 +0000 UTC Type:0 Mac:52:54:00:f9:5b:38 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-029294 Clientid:01:52:54:00:f9:5b:38}
	I0127 12:28:24.618652 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined IP address 192.168.50.10 and MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:28:24.618772 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHPort
	I0127 12:28:24.618956 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHKeyPath
	I0127 12:28:24.619087 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHKeyPath
	I0127 12:28:24.619243 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHUsername
	I0127 12:28:24.619421 1771790 main.go:141] libmachine: Using SSH client type: native
	I0127 12:28:24.619597 1771790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I0127 12:28:24.619610 1771790 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-029294 && echo "kubernetes-upgrade-029294" | sudo tee /etc/hostname
	I0127 12:28:24.738681 1771790 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-029294
	
	I0127 12:28:24.738712 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHHostname
	I0127 12:28:24.741637 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:28:24.741991 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5b:38", ip: ""} in network mk-kubernetes-upgrade-029294: {Iface:virbr2 ExpiryTime:2025-01-27 13:27:59 +0000 UTC Type:0 Mac:52:54:00:f9:5b:38 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-029294 Clientid:01:52:54:00:f9:5b:38}
	I0127 12:28:24.742034 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined IP address 192.168.50.10 and MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:28:24.742163 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHPort
	I0127 12:28:24.742368 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHKeyPath
	I0127 12:28:24.742501 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHKeyPath
	I0127 12:28:24.742638 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHUsername
	I0127 12:28:24.742843 1771790 main.go:141] libmachine: Using SSH client type: native
	I0127 12:28:24.743088 1771790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I0127 12:28:24.743112 1771790 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-029294' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-029294/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-029294' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 12:28:24.844296 1771790 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 12:28:24.844330 1771790 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20318-1724227/.minikube CaCertPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20318-1724227/.minikube}
	I0127 12:28:24.844389 1771790 buildroot.go:174] setting up certificates
	I0127 12:28:24.844405 1771790 provision.go:84] configureAuth start
	I0127 12:28:24.844427 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetMachineName
	I0127 12:28:24.844714 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetIP
	I0127 12:28:24.847261 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:28:24.847632 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5b:38", ip: ""} in network mk-kubernetes-upgrade-029294: {Iface:virbr2 ExpiryTime:2025-01-27 13:27:59 +0000 UTC Type:0 Mac:52:54:00:f9:5b:38 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-029294 Clientid:01:52:54:00:f9:5b:38}
	I0127 12:28:24.847659 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined IP address 192.168.50.10 and MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:28:24.847845 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHHostname
	I0127 12:28:24.850084 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:28:24.850430 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5b:38", ip: ""} in network mk-kubernetes-upgrade-029294: {Iface:virbr2 ExpiryTime:2025-01-27 13:27:59 +0000 UTC Type:0 Mac:52:54:00:f9:5b:38 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-029294 Clientid:01:52:54:00:f9:5b:38}
	I0127 12:28:24.850464 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined IP address 192.168.50.10 and MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:28:24.850553 1771790 provision.go:143] copyHostCerts
	I0127 12:28:24.850608 1771790 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.pem, removing ...
	I0127 12:28:24.850628 1771790 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.pem
	I0127 12:28:24.850672 1771790 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.pem (1078 bytes)
	I0127 12:28:24.850767 1771790 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-1724227/.minikube/cert.pem, removing ...
	I0127 12:28:24.850780 1771790 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-1724227/.minikube/cert.pem
	I0127 12:28:24.850802 1771790 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20318-1724227/.minikube/cert.pem (1123 bytes)
	I0127 12:28:24.850858 1771790 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-1724227/.minikube/key.pem, removing ...
	I0127 12:28:24.850872 1771790 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-1724227/.minikube/key.pem
	I0127 12:28:24.850891 1771790 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20318-1724227/.minikube/key.pem (1675 bytes)
	I0127 12:28:24.850938 1771790 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-029294 san=[127.0.0.1 192.168.50.10 kubernetes-upgrade-029294 localhost minikube]
	I0127 12:28:24.951024 1771790 provision.go:177] copyRemoteCerts
	I0127 12:28:24.951075 1771790 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 12:28:24.951097 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHHostname
	I0127 12:28:24.953648 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:28:24.954008 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5b:38", ip: ""} in network mk-kubernetes-upgrade-029294: {Iface:virbr2 ExpiryTime:2025-01-27 13:27:59 +0000 UTC Type:0 Mac:52:54:00:f9:5b:38 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-029294 Clientid:01:52:54:00:f9:5b:38}
	I0127 12:28:24.954044 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined IP address 192.168.50.10 and MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:28:24.954174 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHPort
	I0127 12:28:24.954364 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHKeyPath
	I0127 12:28:24.954530 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHUsername
	I0127 12:28:24.954686 1771790 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/kubernetes-upgrade-029294/id_rsa Username:docker}
	I0127 12:28:25.032563 1771790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 12:28:25.058755 1771790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0127 12:28:25.081678 1771790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 12:28:25.107669 1771790 provision.go:87] duration metric: took 263.24644ms to configureAuth
	I0127 12:28:25.107694 1771790 buildroot.go:189] setting minikube options for container-runtime
	I0127 12:28:25.107877 1771790 config.go:182] Loaded profile config "kubernetes-upgrade-029294": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:28:25.107953 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHHostname
	I0127 12:28:25.110492 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:28:25.110916 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5b:38", ip: ""} in network mk-kubernetes-upgrade-029294: {Iface:virbr2 ExpiryTime:2025-01-27 13:27:59 +0000 UTC Type:0 Mac:52:54:00:f9:5b:38 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-029294 Clientid:01:52:54:00:f9:5b:38}
	I0127 12:28:25.110963 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined IP address 192.168.50.10 and MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:28:25.111087 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHPort
	I0127 12:28:25.111270 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHKeyPath
	I0127 12:28:25.111462 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHKeyPath
	I0127 12:28:25.111618 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHUsername
	I0127 12:28:25.111783 1771790 main.go:141] libmachine: Using SSH client type: native
	I0127 12:28:25.112014 1771790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I0127 12:28:25.112030 1771790 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 12:28:25.961257 1771790 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 12:28:25.961292 1771790 machine.go:96] duration metric: took 1.451437672s to provisionDockerMachine
	I0127 12:28:25.961306 1771790 start.go:293] postStartSetup for "kubernetes-upgrade-029294" (driver="kvm2")
	I0127 12:28:25.961321 1771790 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 12:28:25.961341 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .DriverName
	I0127 12:28:25.961678 1771790 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 12:28:25.961728 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHHostname
	I0127 12:28:25.964577 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:28:25.964961 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5b:38", ip: ""} in network mk-kubernetes-upgrade-029294: {Iface:virbr2 ExpiryTime:2025-01-27 13:27:59 +0000 UTC Type:0 Mac:52:54:00:f9:5b:38 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-029294 Clientid:01:52:54:00:f9:5b:38}
	I0127 12:28:25.964991 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined IP address 192.168.50.10 and MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:28:25.965184 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHPort
	I0127 12:28:25.965375 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHKeyPath
	I0127 12:28:25.965510 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHUsername
	I0127 12:28:25.965637 1771790 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/kubernetes-upgrade-029294/id_rsa Username:docker}
	I0127 12:28:26.044609 1771790 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 12:28:26.048885 1771790 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 12:28:26.048914 1771790 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-1724227/.minikube/addons for local assets ...
	I0127 12:28:26.048979 1771790 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-1724227/.minikube/files for local assets ...
	I0127 12:28:26.049091 1771790 filesync.go:149] local asset: /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem -> 17313962.pem in /etc/ssl/certs
	I0127 12:28:26.049586 1771790 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 12:28:26.060160 1771790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem --> /etc/ssl/certs/17313962.pem (1708 bytes)
	I0127 12:28:26.081412 1771790 start.go:296] duration metric: took 120.090548ms for postStartSetup
	I0127 12:28:26.081452 1771790 fix.go:56] duration metric: took 1.59146128s for fixHost
	I0127 12:28:26.081477 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHHostname
	I0127 12:28:26.084051 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:28:26.084464 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5b:38", ip: ""} in network mk-kubernetes-upgrade-029294: {Iface:virbr2 ExpiryTime:2025-01-27 13:27:59 +0000 UTC Type:0 Mac:52:54:00:f9:5b:38 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-029294 Clientid:01:52:54:00:f9:5b:38}
	I0127 12:28:26.084499 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined IP address 192.168.50.10 and MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:28:26.084657 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHPort
	I0127 12:28:26.084827 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHKeyPath
	I0127 12:28:26.084985 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHKeyPath
	I0127 12:28:26.085165 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHUsername
	I0127 12:28:26.085333 1771790 main.go:141] libmachine: Using SSH client type: native
	I0127 12:28:26.085532 1771790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I0127 12:28:26.085545 1771790 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 12:28:26.187802 1771790 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737980906.145694917
	
	I0127 12:28:26.187829 1771790 fix.go:216] guest clock: 1737980906.145694917
	I0127 12:28:26.187839 1771790 fix.go:229] Guest: 2025-01-27 12:28:26.145694917 +0000 UTC Remote: 2025-01-27 12:28:26.081456566 +0000 UTC m=+1.733236372 (delta=64.238351ms)
	I0127 12:28:26.187881 1771790 fix.go:200] guest clock delta is within tolerance: 64.238351ms
	I0127 12:28:26.187890 1771790 start.go:83] releasing machines lock for "kubernetes-upgrade-029294", held for 1.697911567s
	I0127 12:28:26.187912 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .DriverName
	I0127 12:28:26.188175 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetIP
	I0127 12:28:26.191149 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:28:26.191603 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5b:38", ip: ""} in network mk-kubernetes-upgrade-029294: {Iface:virbr2 ExpiryTime:2025-01-27 13:27:59 +0000 UTC Type:0 Mac:52:54:00:f9:5b:38 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-029294 Clientid:01:52:54:00:f9:5b:38}
	I0127 12:28:26.191640 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined IP address 192.168.50.10 and MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:28:26.191931 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .DriverName
	I0127 12:28:26.192441 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .DriverName
	I0127 12:28:26.192656 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .DriverName
	I0127 12:28:26.192759 1771790 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 12:28:26.192803 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHHostname
	I0127 12:28:26.192892 1771790 ssh_runner.go:195] Run: cat /version.json
	I0127 12:28:26.192910 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHHostname
	I0127 12:28:26.195477 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:28:26.195775 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:28:26.195884 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5b:38", ip: ""} in network mk-kubernetes-upgrade-029294: {Iface:virbr2 ExpiryTime:2025-01-27 13:27:59 +0000 UTC Type:0 Mac:52:54:00:f9:5b:38 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-029294 Clientid:01:52:54:00:f9:5b:38}
	I0127 12:28:26.195919 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined IP address 192.168.50.10 and MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:28:26.196093 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHPort
	I0127 12:28:26.196205 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5b:38", ip: ""} in network mk-kubernetes-upgrade-029294: {Iface:virbr2 ExpiryTime:2025-01-27 13:27:59 +0000 UTC Type:0 Mac:52:54:00:f9:5b:38 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-029294 Clientid:01:52:54:00:f9:5b:38}
	I0127 12:28:26.196245 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHKeyPath
	I0127 12:28:26.196254 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined IP address 192.168.50.10 and MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:28:26.196423 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHUsername
	I0127 12:28:26.196445 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHPort
	I0127 12:28:26.196648 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHKeyPath
	I0127 12:28:26.196650 1771790 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/kubernetes-upgrade-029294/id_rsa Username:docker}
	I0127 12:28:26.196778 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetSSHUsername
	I0127 12:28:26.196934 1771790 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/kubernetes-upgrade-029294/id_rsa Username:docker}
	I0127 12:28:26.364563 1771790 ssh_runner.go:195] Run: systemctl --version
	I0127 12:28:26.370116 1771790 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 12:28:26.578283 1771790 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 12:28:26.588175 1771790 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 12:28:26.588259 1771790 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 12:28:26.612771 1771790 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0127 12:28:26.612796 1771790 start.go:495] detecting cgroup driver to use...
	I0127 12:28:26.612865 1771790 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 12:28:26.649121 1771790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 12:28:26.679005 1771790 docker.go:217] disabling cri-docker service (if available) ...
	I0127 12:28:26.679072 1771790 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 12:28:26.704187 1771790 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 12:28:26.743393 1771790 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 12:28:26.919076 1771790 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 12:28:27.095364 1771790 docker.go:233] disabling docker service ...
	I0127 12:28:27.095440 1771790 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 12:28:27.111759 1771790 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 12:28:27.124673 1771790 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 12:28:27.277982 1771790 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 12:28:27.452646 1771790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 12:28:27.466661 1771790 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 12:28:27.490586 1771790 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 12:28:27.490656 1771790 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:28:27.502461 1771790 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 12:28:27.502523 1771790 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:28:27.513392 1771790 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:28:27.524273 1771790 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:28:27.535363 1771790 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 12:28:27.547100 1771790 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:28:27.559164 1771790 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:28:27.574022 1771790 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:28:27.588377 1771790 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 12:28:27.601301 1771790 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 12:28:27.615032 1771790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:28:27.772968 1771790 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 12:28:28.065381 1771790 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 12:28:28.065465 1771790 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 12:28:28.071024 1771790 start.go:563] Will wait 60s for crictl version
	I0127 12:28:28.071080 1771790 ssh_runner.go:195] Run: which crictl
	I0127 12:28:28.075456 1771790 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 12:28:28.107002 1771790 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 12:28:28.107115 1771790 ssh_runner.go:195] Run: crio --version
	I0127 12:28:28.138780 1771790 ssh_runner.go:195] Run: crio --version
	I0127 12:28:28.169728 1771790 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 12:28:28.170921 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) Calling .GetIP
	I0127 12:28:28.173750 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:28:28.174098 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5b:38", ip: ""} in network mk-kubernetes-upgrade-029294: {Iface:virbr2 ExpiryTime:2025-01-27 13:27:59 +0000 UTC Type:0 Mac:52:54:00:f9:5b:38 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-029294 Clientid:01:52:54:00:f9:5b:38}
	I0127 12:28:28.174127 1771790 main.go:141] libmachine: (kubernetes-upgrade-029294) DBG | domain kubernetes-upgrade-029294 has defined IP address 192.168.50.10 and MAC address 52:54:00:f9:5b:38 in network mk-kubernetes-upgrade-029294
	I0127 12:28:28.174298 1771790 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0127 12:28:28.178264 1771790 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-029294 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:kubernetes-upgrade-029294 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.10 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 12:28:28.178361 1771790 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 12:28:28.178401 1771790 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 12:28:28.220328 1771790 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 12:28:28.220358 1771790 crio.go:433] Images already preloaded, skipping extraction
	I0127 12:28:28.220421 1771790 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 12:28:28.307388 1771790 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 12:28:28.307416 1771790 cache_images.go:84] Images are preloaded, skipping loading
	I0127 12:28:28.307425 1771790 kubeadm.go:934] updating node { 192.168.50.10 8443 v1.32.1 crio true true} ...
	I0127 12:28:28.307547 1771790 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-029294 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:kubernetes-upgrade-029294 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 12:28:28.307636 1771790 ssh_runner.go:195] Run: crio config
	I0127 12:28:28.420224 1771790 cni.go:84] Creating CNI manager for ""
	I0127 12:28:28.420257 1771790 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 12:28:28.420272 1771790 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 12:28:28.420306 1771790 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.10 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-029294 NodeName:kubernetes-upgrade-029294 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 12:28:28.420505 1771790 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-029294"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.10"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.10"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 12:28:28.420597 1771790 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 12:28:28.452040 1771790 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 12:28:28.452133 1771790 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 12:28:28.481973 1771790 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0127 12:28:28.524572 1771790 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 12:28:28.547840 1771790 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2302 bytes)
	I0127 12:28:28.567910 1771790 ssh_runner.go:195] Run: grep 192.168.50.10	control-plane.minikube.internal$ /etc/hosts
	I0127 12:28:28.571927 1771790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:28:28.706702 1771790 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:28:28.720952 1771790 certs.go:68] Setting up /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kubernetes-upgrade-029294 for IP: 192.168.50.10
	I0127 12:28:28.720977 1771790 certs.go:194] generating shared ca certs ...
	I0127 12:28:28.720999 1771790 certs.go:226] acquiring lock for ca certs: {Name:mk9e5c59e9dbe250ccc1601895509e2d4f3690e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:28:28.721215 1771790 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.key
	I0127 12:28:28.721291 1771790 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.key
	I0127 12:28:28.721311 1771790 certs.go:256] generating profile certs ...
	I0127 12:28:28.721405 1771790 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kubernetes-upgrade-029294/client.key
	I0127 12:28:28.721452 1771790 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kubernetes-upgrade-029294/apiserver.key.bf32c52a
	I0127 12:28:28.721490 1771790 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kubernetes-upgrade-029294/proxy-client.key
	I0127 12:28:28.721609 1771790 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/1731396.pem (1338 bytes)
	W0127 12:28:28.721656 1771790 certs.go:480] ignoring /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/1731396_empty.pem, impossibly tiny 0 bytes
	I0127 12:28:28.721671 1771790 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 12:28:28.721693 1771790 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem (1078 bytes)
	I0127 12:28:28.721716 1771790 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem (1123 bytes)
	I0127 12:28:28.721737 1771790 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/key.pem (1675 bytes)
	I0127 12:28:28.721777 1771790 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem (1708 bytes)
	I0127 12:28:28.722403 1771790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 12:28:28.754038 1771790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 12:28:28.778573 1771790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 12:28:28.801946 1771790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 12:28:28.827356 1771790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kubernetes-upgrade-029294/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0127 12:28:28.849149 1771790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kubernetes-upgrade-029294/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 12:28:28.871714 1771790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kubernetes-upgrade-029294/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 12:28:28.894366 1771790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kubernetes-upgrade-029294/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 12:28:28.916855 1771790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 12:28:28.939534 1771790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/1731396.pem --> /usr/share/ca-certificates/1731396.pem (1338 bytes)
	I0127 12:28:28.962036 1771790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem --> /usr/share/ca-certificates/17313962.pem (1708 bytes)
	I0127 12:28:28.984780 1771790 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 12:28:29.004416 1771790 ssh_runner.go:195] Run: openssl version
	I0127 12:28:29.011163 1771790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 12:28:29.022331 1771790 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:28:29.027611 1771790 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 11:24 /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:28:29.027688 1771790 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:28:29.035276 1771790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 12:28:29.046301 1771790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1731396.pem && ln -fs /usr/share/ca-certificates/1731396.pem /etc/ssl/certs/1731396.pem"
	I0127 12:28:29.057117 1771790 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1731396.pem
	I0127 12:28:29.061709 1771790 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 11:32 /usr/share/ca-certificates/1731396.pem
	I0127 12:28:29.061776 1771790 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1731396.pem
	I0127 12:28:29.067189 1771790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1731396.pem /etc/ssl/certs/51391683.0"
	I0127 12:28:29.076102 1771790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17313962.pem && ln -fs /usr/share/ca-certificates/17313962.pem /etc/ssl/certs/17313962.pem"
	I0127 12:28:29.087890 1771790 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17313962.pem
	I0127 12:28:29.092195 1771790 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 11:32 /usr/share/ca-certificates/17313962.pem
	I0127 12:28:29.092244 1771790 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17313962.pem
	I0127 12:28:29.097605 1771790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17313962.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 12:28:29.108146 1771790 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 12:28:29.112582 1771790 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 12:28:29.118234 1771790 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 12:28:29.123476 1771790 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 12:28:29.128533 1771790 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 12:28:29.133560 1771790 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 12:28:29.138612 1771790 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0127 12:28:29.143852 1771790 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-029294 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:kubernetes-upgrade-029294 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.10 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:28:29.143926 1771790 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 12:28:29.143959 1771790 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 12:28:29.178404 1771790 cri.go:89] found id: "15f7a49b8cb5f76694c0bdb6940753f93ad0947163310136b4ebcdac97e6cf0f"
	I0127 12:28:29.178424 1771790 cri.go:89] found id: "0a98b54c2e74c2c14d3542d970bc953db34292c418555cec1a49197fb3f1683b"
	I0127 12:28:29.178428 1771790 cri.go:89] found id: "27edd1a9e7f6e40d09e758341a0cc144d78338ad6c6d1d885c8cc10986f0b12f"
	I0127 12:28:29.178431 1771790 cri.go:89] found id: "b69b493001e6e418706a7ef95559419f01dba863dedcd82ba66188abc98498d2"
	I0127 12:28:29.178433 1771790 cri.go:89] found id: ""
	I0127 12:28:29.178472 1771790 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-029294 -n kubernetes-upgrade-029294
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-029294 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-029294 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-029294 describe pod storage-provisioner: exit status 1 (65.627879ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-029294 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-029294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-029294
--- FAIL: TestKubernetesUpgrade (376.54s)
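
The CRI-O provisioning steps recorded in the log above (rewriting /etc/crio/crio.conf.d/02-crio.conf for the pause image and the cgroupfs cgroup manager, writing /etc/crictl.yaml, then restarting crio) can be spot-checked by hand when triaging a similar failure. A minimal sketch, assuming the profile has not yet been deleted and is reachable via "minikube ssh -p kubernetes-upgrade-029294"; the file paths are the ones shown in the log, and the exact expected values may differ between minikube releases:

	# inspect the drop-in that minikube rewrites with sed in the log above
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls' /etc/crio/crio.conf.d/02-crio.conf
	# confirm crictl is pointed at the CRI-O socket written to /etc/crictl.yaml
	cat /etc/crictl.yaml
	sudo crictl version
	# verify the runtime came back up after the config changes
	sudo systemctl status crio --no-pager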

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (275.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-488586 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-488586 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m34.919606444s)

                                                
                                                
-- stdout --
	* [old-k8s-version-488586] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20318
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20318-1724227/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-1724227/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-488586" primary control-plane node in "old-k8s-version-488586" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 12:27:05.438076 1770976 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:27:05.438425 1770976 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:27:05.438535 1770976 out.go:358] Setting ErrFile to fd 2...
	I0127 12:27:05.438557 1770976 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:27:05.438973 1770976 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-1724227/.minikube/bin
	I0127 12:27:05.439870 1770976 out.go:352] Setting JSON to false
	I0127 12:27:05.440906 1770976 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":32966,"bootTime":1737947859,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 12:27:05.441012 1770976 start.go:139] virtualization: kvm guest
	I0127 12:27:05.442928 1770976 out.go:177] * [old-k8s-version-488586] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 12:27:05.444610 1770976 notify.go:220] Checking for updates...
	I0127 12:27:05.444629 1770976 out.go:177]   - MINIKUBE_LOCATION=20318
	I0127 12:27:05.446153 1770976 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:27:05.447629 1770976 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20318-1724227/kubeconfig
	I0127 12:27:05.448863 1770976 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-1724227/.minikube
	I0127 12:27:05.450122 1770976 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 12:27:05.451562 1770976 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:27:05.453257 1770976 config.go:182] Loaded profile config "cert-expiration-103712": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:27:05.453369 1770976 config.go:182] Loaded profile config "kubernetes-upgrade-029294": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 12:27:05.453486 1770976 config.go:182] Loaded profile config "pause-502641": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:27:05.453589 1770976 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:27:05.489602 1770976 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 12:27:05.490879 1770976 start.go:297] selected driver: kvm2
	I0127 12:27:05.490896 1770976 start.go:901] validating driver "kvm2" against <nil>
	I0127 12:27:05.490911 1770976 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:27:05.491836 1770976 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:27:05.491920 1770976 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20318-1724227/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 12:27:05.507227 1770976 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 12:27:05.507283 1770976 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 12:27:05.507518 1770976 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 12:27:05.507553 1770976 cni.go:84] Creating CNI manager for ""
	I0127 12:27:05.507593 1770976 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 12:27:05.507605 1770976 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 12:27:05.507652 1770976 start.go:340] cluster config:
	{Name:old-k8s-version-488586 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-488586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:27:05.507754 1770976 iso.go:125] acquiring lock: {Name:mkadfcca31fda677c8c62cad1d325dd2bd0a2473 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:27:05.509536 1770976 out.go:177] * Starting "old-k8s-version-488586" primary control-plane node in "old-k8s-version-488586" cluster
	I0127 12:27:05.510874 1770976 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 12:27:05.510912 1770976 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0127 12:27:05.510920 1770976 cache.go:56] Caching tarball of preloaded images
	I0127 12:27:05.511000 1770976 preload.go:172] Found /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 12:27:05.511011 1770976 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0127 12:27:05.511101 1770976 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/old-k8s-version-488586/config.json ...
	I0127 12:27:05.511124 1770976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/old-k8s-version-488586/config.json: {Name:mkf6d7a65a132b3ccc4a827f52a92dd78be36442 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:27:05.511249 1770976 start.go:360] acquireMachinesLock for old-k8s-version-488586: {Name:mk206ddd5e564ea6986d65bef76be5837a8b5360 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 12:27:10.634757 1770976 start.go:364] duration metric: took 5.123452777s to acquireMachinesLock for "old-k8s-version-488586"
	I0127 12:27:10.634834 1770976 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-488586 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-488586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 12:27:10.634964 1770976 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 12:27:10.637085 1770976 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0127 12:27:10.637264 1770976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:27:10.637330 1770976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:27:10.653811 1770976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38795
	I0127 12:27:10.654221 1770976 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:27:10.654762 1770976 main.go:141] libmachine: Using API Version  1
	I0127 12:27:10.654788 1770976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:27:10.655105 1770976 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:27:10.655318 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetMachineName
	I0127 12:27:10.655466 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .DriverName
	I0127 12:27:10.655611 1770976 start.go:159] libmachine.API.Create for "old-k8s-version-488586" (driver="kvm2")
	I0127 12:27:10.655654 1770976 client.go:168] LocalClient.Create starting
	I0127 12:27:10.655683 1770976 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem
	I0127 12:27:10.655723 1770976 main.go:141] libmachine: Decoding PEM data...
	I0127 12:27:10.655739 1770976 main.go:141] libmachine: Parsing certificate...
	I0127 12:27:10.655798 1770976 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem
	I0127 12:27:10.655817 1770976 main.go:141] libmachine: Decoding PEM data...
	I0127 12:27:10.655831 1770976 main.go:141] libmachine: Parsing certificate...
	I0127 12:27:10.655852 1770976 main.go:141] libmachine: Running pre-create checks...
	I0127 12:27:10.655861 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .PreCreateCheck
	I0127 12:27:10.656191 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetConfigRaw
	I0127 12:27:10.656610 1770976 main.go:141] libmachine: Creating machine...
	I0127 12:27:10.656627 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .Create
	I0127 12:27:10.656784 1770976 main.go:141] libmachine: (old-k8s-version-488586) creating KVM machine...
	I0127 12:27:10.656800 1770976 main.go:141] libmachine: (old-k8s-version-488586) creating network...
	I0127 12:27:10.658137 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | found existing default KVM network
	I0127 12:27:10.660098 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | I0127 12:27:10.659912 1771043 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000266180}
	I0127 12:27:10.660129 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | created network xml: 
	I0127 12:27:10.660144 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | <network>
	I0127 12:27:10.660158 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG |   <name>mk-old-k8s-version-488586</name>
	I0127 12:27:10.660167 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG |   <dns enable='no'/>
	I0127 12:27:10.660174 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG |   
	I0127 12:27:10.660183 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0127 12:27:10.660190 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG |     <dhcp>
	I0127 12:27:10.660226 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0127 12:27:10.660253 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG |     </dhcp>
	I0127 12:27:10.660270 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG |   </ip>
	I0127 12:27:10.660288 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG |   
	I0127 12:27:10.660299 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | </network>
	I0127 12:27:10.660307 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | 
	I0127 12:27:10.665482 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | trying to create private KVM network mk-old-k8s-version-488586 192.168.39.0/24...
	I0127 12:27:10.738813 1770976 main.go:141] libmachine: (old-k8s-version-488586) setting up store path in /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/old-k8s-version-488586 ...
	I0127 12:27:10.738874 1770976 main.go:141] libmachine: (old-k8s-version-488586) building disk image from file:///home/jenkins/minikube-integration/20318-1724227/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 12:27:10.738885 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | private KVM network mk-old-k8s-version-488586 192.168.39.0/24 created
	I0127 12:27:10.738903 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | I0127 12:27:10.738724 1771043 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20318-1724227/.minikube
	I0127 12:27:10.738966 1770976 main.go:141] libmachine: (old-k8s-version-488586) Downloading /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20318-1724227/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 12:27:11.010083 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | I0127 12:27:11.009912 1771043 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/old-k8s-version-488586/id_rsa...
	I0127 12:27:11.112707 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | I0127 12:27:11.112556 1771043 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/old-k8s-version-488586/old-k8s-version-488586.rawdisk...
	I0127 12:27:11.112758 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | Writing magic tar header
	I0127 12:27:11.112802 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | Writing SSH key tar header
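
The id_rsa reported by common.go:151 is a plain RSA key pair written with 0600 permissions; a minimal stand-alone sketch with the Go standard library (the 2048-bit size and output path are assumptions, not the driver's exact parameters):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"
)

// writeRSAKey generates an RSA key and writes it in PEM form with the same
// 0600 permissions the log reports for id_rsa.
func writeRSAKey(path string) error {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	block := &pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	}
	return os.WriteFile(path, pem.EncodeToMemory(block), 0o600)
}

func main() {
	if err := writeRSAKey("id_rsa"); err != nil {
		panic(err)
	}
}
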
	I0127 12:27:11.112855 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | I0127 12:27:11.112720 1771043 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/old-k8s-version-488586 ...
	I0127 12:27:11.112889 1770976 main.go:141] libmachine: (old-k8s-version-488586) setting executable bit set on /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/old-k8s-version-488586 (perms=drwx------)
	I0127 12:27:11.112913 1770976 main.go:141] libmachine: (old-k8s-version-488586) setting executable bit set on /home/jenkins/minikube-integration/20318-1724227/.minikube/machines (perms=drwxr-xr-x)
	I0127 12:27:11.112927 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/old-k8s-version-488586
	I0127 12:27:11.112945 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines
	I0127 12:27:11.112958 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20318-1724227/.minikube
	I0127 12:27:11.112976 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20318-1724227
	I0127 12:27:11.112989 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 12:27:11.113000 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | checking permissions on dir: /home/jenkins
	I0127 12:27:11.113013 1770976 main.go:141] libmachine: (old-k8s-version-488586) setting executable bit set on /home/jenkins/minikube-integration/20318-1724227/.minikube (perms=drwxr-xr-x)
	I0127 12:27:11.113027 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | checking permissions on dir: /home
	I0127 12:27:11.113041 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | skipping /home - not owner
	I0127 12:27:11.113056 1770976 main.go:141] libmachine: (old-k8s-version-488586) setting executable bit set on /home/jenkins/minikube-integration/20318-1724227 (perms=drwxrwxr-x)
	I0127 12:27:11.113067 1770976 main.go:141] libmachine: (old-k8s-version-488586) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 12:27:11.113078 1770976 main.go:141] libmachine: (old-k8s-version-488586) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 12:27:11.113087 1770976 main.go:141] libmachine: (old-k8s-version-488586) creating domain...
	I0127 12:27:11.114300 1770976 main.go:141] libmachine: (old-k8s-version-488586) define libvirt domain using xml: 
	I0127 12:27:11.114326 1770976 main.go:141] libmachine: (old-k8s-version-488586) <domain type='kvm'>
	I0127 12:27:11.114347 1770976 main.go:141] libmachine: (old-k8s-version-488586)   <name>old-k8s-version-488586</name>
	I0127 12:27:11.114367 1770976 main.go:141] libmachine: (old-k8s-version-488586)   <memory unit='MiB'>2200</memory>
	I0127 12:27:11.114380 1770976 main.go:141] libmachine: (old-k8s-version-488586)   <vcpu>2</vcpu>
	I0127 12:27:11.114387 1770976 main.go:141] libmachine: (old-k8s-version-488586)   <features>
	I0127 12:27:11.114402 1770976 main.go:141] libmachine: (old-k8s-version-488586)     <acpi/>
	I0127 12:27:11.114411 1770976 main.go:141] libmachine: (old-k8s-version-488586)     <apic/>
	I0127 12:27:11.114422 1770976 main.go:141] libmachine: (old-k8s-version-488586)     <pae/>
	I0127 12:27:11.114430 1770976 main.go:141] libmachine: (old-k8s-version-488586)     
	I0127 12:27:11.114441 1770976 main.go:141] libmachine: (old-k8s-version-488586)   </features>
	I0127 12:27:11.114451 1770976 main.go:141] libmachine: (old-k8s-version-488586)   <cpu mode='host-passthrough'>
	I0127 12:27:11.114457 1770976 main.go:141] libmachine: (old-k8s-version-488586)   
	I0127 12:27:11.114465 1770976 main.go:141] libmachine: (old-k8s-version-488586)   </cpu>
	I0127 12:27:11.114480 1770976 main.go:141] libmachine: (old-k8s-version-488586)   <os>
	I0127 12:27:11.114509 1770976 main.go:141] libmachine: (old-k8s-version-488586)     <type>hvm</type>
	I0127 12:27:11.114550 1770976 main.go:141] libmachine: (old-k8s-version-488586)     <boot dev='cdrom'/>
	I0127 12:27:11.114564 1770976 main.go:141] libmachine: (old-k8s-version-488586)     <boot dev='hd'/>
	I0127 12:27:11.114572 1770976 main.go:141] libmachine: (old-k8s-version-488586)     <bootmenu enable='no'/>
	I0127 12:27:11.114583 1770976 main.go:141] libmachine: (old-k8s-version-488586)   </os>
	I0127 12:27:11.114589 1770976 main.go:141] libmachine: (old-k8s-version-488586)   <devices>
	I0127 12:27:11.114601 1770976 main.go:141] libmachine: (old-k8s-version-488586)     <disk type='file' device='cdrom'>
	I0127 12:27:11.114615 1770976 main.go:141] libmachine: (old-k8s-version-488586)       <source file='/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/old-k8s-version-488586/boot2docker.iso'/>
	I0127 12:27:11.114631 1770976 main.go:141] libmachine: (old-k8s-version-488586)       <target dev='hdc' bus='scsi'/>
	I0127 12:27:11.114643 1770976 main.go:141] libmachine: (old-k8s-version-488586)       <readonly/>
	I0127 12:27:11.114656 1770976 main.go:141] libmachine: (old-k8s-version-488586)     </disk>
	I0127 12:27:11.114668 1770976 main.go:141] libmachine: (old-k8s-version-488586)     <disk type='file' device='disk'>
	I0127 12:27:11.114681 1770976 main.go:141] libmachine: (old-k8s-version-488586)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 12:27:11.114699 1770976 main.go:141] libmachine: (old-k8s-version-488586)       <source file='/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/old-k8s-version-488586/old-k8s-version-488586.rawdisk'/>
	I0127 12:27:11.114710 1770976 main.go:141] libmachine: (old-k8s-version-488586)       <target dev='hda' bus='virtio'/>
	I0127 12:27:11.114766 1770976 main.go:141] libmachine: (old-k8s-version-488586)     </disk>
	I0127 12:27:11.114786 1770976 main.go:141] libmachine: (old-k8s-version-488586)     <interface type='network'>
	I0127 12:27:11.114796 1770976 main.go:141] libmachine: (old-k8s-version-488586)       <source network='mk-old-k8s-version-488586'/>
	I0127 12:27:11.114802 1770976 main.go:141] libmachine: (old-k8s-version-488586)       <model type='virtio'/>
	I0127 12:27:11.114809 1770976 main.go:141] libmachine: (old-k8s-version-488586)     </interface>
	I0127 12:27:11.114824 1770976 main.go:141] libmachine: (old-k8s-version-488586)     <interface type='network'>
	I0127 12:27:11.114833 1770976 main.go:141] libmachine: (old-k8s-version-488586)       <source network='default'/>
	I0127 12:27:11.114839 1770976 main.go:141] libmachine: (old-k8s-version-488586)       <model type='virtio'/>
	I0127 12:27:11.114847 1770976 main.go:141] libmachine: (old-k8s-version-488586)     </interface>
	I0127 12:27:11.114853 1770976 main.go:141] libmachine: (old-k8s-version-488586)     <serial type='pty'>
	I0127 12:27:11.114862 1770976 main.go:141] libmachine: (old-k8s-version-488586)       <target port='0'/>
	I0127 12:27:11.114868 1770976 main.go:141] libmachine: (old-k8s-version-488586)     </serial>
	I0127 12:27:11.114875 1770976 main.go:141] libmachine: (old-k8s-version-488586)     <console type='pty'>
	I0127 12:27:11.114882 1770976 main.go:141] libmachine: (old-k8s-version-488586)       <target type='serial' port='0'/>
	I0127 12:27:11.114890 1770976 main.go:141] libmachine: (old-k8s-version-488586)     </console>
	I0127 12:27:11.114896 1770976 main.go:141] libmachine: (old-k8s-version-488586)     <rng model='virtio'>
	I0127 12:27:11.114905 1770976 main.go:141] libmachine: (old-k8s-version-488586)       <backend model='random'>/dev/random</backend>
	I0127 12:27:11.114916 1770976 main.go:141] libmachine: (old-k8s-version-488586)     </rng>
	I0127 12:27:11.114926 1770976 main.go:141] libmachine: (old-k8s-version-488586)     
	I0127 12:27:11.114932 1770976 main.go:141] libmachine: (old-k8s-version-488586)     
	I0127 12:27:11.114939 1770976 main.go:141] libmachine: (old-k8s-version-488586)   </devices>
	I0127 12:27:11.114945 1770976 main.go:141] libmachine: (old-k8s-version-488586) </domain>
	I0127 12:27:11.114956 1770976 main.go:141] libmachine: (old-k8s-version-488586) 
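
The domain XML above is rendered from the machine's config before being defined in libvirt; a trimmed-down sketch of the same templating idea (the struct, field names and paths here are illustrative, not the driver's own):

package main

import (
	"os"
	"text/template"
)

// domainTmpl is a reduced version of the XML printed in the log, with the
// machine-specific values pulled out as template fields.
const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISO}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.RawDisk}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

type machine struct {
	Name, ISO, RawDisk, Network string
	MemoryMiB, CPUs             int
}

func main() {
	t := template.Must(template.New("domain").Parse(domainTmpl))
	m := machine{
		Name:      "old-k8s-version-488586",
		ISO:       "/path/to/boot2docker.iso",            // placeholder
		RawDisk:   "/path/to/old-k8s-version-488586.rawdisk", // placeholder
		Network:   "mk-old-k8s-version-488586",
		MemoryMiB: 2200,
		CPUs:      2,
	}
	if err := t.Execute(os.Stdout, m); err != nil {
		panic(err)
	}
}
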
	I0127 12:27:11.119875 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:24:a1:6a in network default
	I0127 12:27:11.120433 1770976 main.go:141] libmachine: (old-k8s-version-488586) starting domain...
	I0127 12:27:11.120459 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:11.120468 1770976 main.go:141] libmachine: (old-k8s-version-488586) ensuring networks are active...
	I0127 12:27:11.121080 1770976 main.go:141] libmachine: (old-k8s-version-488586) Ensuring network default is active
	I0127 12:27:11.121415 1770976 main.go:141] libmachine: (old-k8s-version-488586) Ensuring network mk-old-k8s-version-488586 is active
	I0127 12:27:11.121877 1770976 main.go:141] libmachine: (old-k8s-version-488586) getting domain XML...
	I0127 12:27:11.122576 1770976 main.go:141] libmachine: (old-k8s-version-488586) creating domain...
	I0127 12:27:12.463149 1770976 main.go:141] libmachine: (old-k8s-version-488586) waiting for IP...
	I0127 12:27:12.464293 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:12.465212 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | unable to find current IP address of domain old-k8s-version-488586 in network mk-old-k8s-version-488586
	I0127 12:27:12.465280 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | I0127 12:27:12.465234 1771043 retry.go:31] will retry after 303.632691ms: waiting for domain to come up
	I0127 12:27:12.771288 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:12.772020 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | unable to find current IP address of domain old-k8s-version-488586 in network mk-old-k8s-version-488586
	I0127 12:27:12.772050 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | I0127 12:27:12.771969 1771043 retry.go:31] will retry after 338.688507ms: waiting for domain to come up
	I0127 12:27:13.112682 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:13.113409 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | unable to find current IP address of domain old-k8s-version-488586 in network mk-old-k8s-version-488586
	I0127 12:27:13.113443 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | I0127 12:27:13.113360 1771043 retry.go:31] will retry after 451.376811ms: waiting for domain to come up
	I0127 12:27:13.566217 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:13.566773 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | unable to find current IP address of domain old-k8s-version-488586 in network mk-old-k8s-version-488586
	I0127 12:27:13.566826 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | I0127 12:27:13.566774 1771043 retry.go:31] will retry after 457.1462ms: waiting for domain to come up
	I0127 12:27:14.025409 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:14.025949 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | unable to find current IP address of domain old-k8s-version-488586 in network mk-old-k8s-version-488586
	I0127 12:27:14.026020 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | I0127 12:27:14.025904 1771043 retry.go:31] will retry after 531.632809ms: waiting for domain to come up
	I0127 12:27:14.559820 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:14.560410 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | unable to find current IP address of domain old-k8s-version-488586 in network mk-old-k8s-version-488586
	I0127 12:27:14.560442 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | I0127 12:27:14.560375 1771043 retry.go:31] will retry after 640.465313ms: waiting for domain to come up
	I0127 12:27:15.202287 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:15.202954 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | unable to find current IP address of domain old-k8s-version-488586 in network mk-old-k8s-version-488586
	I0127 12:27:15.203016 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | I0127 12:27:15.202915 1771043 retry.go:31] will retry after 1.076628926s: waiting for domain to come up
	I0127 12:27:16.280896 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:16.281388 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | unable to find current IP address of domain old-k8s-version-488586 in network mk-old-k8s-version-488586
	I0127 12:27:16.281418 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | I0127 12:27:16.281368 1771043 retry.go:31] will retry after 1.472339447s: waiting for domain to come up
	I0127 12:27:17.755889 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:17.756389 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | unable to find current IP address of domain old-k8s-version-488586 in network mk-old-k8s-version-488586
	I0127 12:27:17.756419 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | I0127 12:27:17.756323 1771043 retry.go:31] will retry after 1.231311101s: waiting for domain to come up
	I0127 12:27:18.990183 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:18.990894 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | unable to find current IP address of domain old-k8s-version-488586 in network mk-old-k8s-version-488586
	I0127 12:27:18.990957 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | I0127 12:27:18.990871 1771043 retry.go:31] will retry after 2.279706723s: waiting for domain to come up
	I0127 12:27:21.271968 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:21.272596 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | unable to find current IP address of domain old-k8s-version-488586 in network mk-old-k8s-version-488586
	I0127 12:27:21.272635 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | I0127 12:27:21.272551 1771043 retry.go:31] will retry after 2.797483584s: waiting for domain to come up
	I0127 12:27:24.072093 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:24.072552 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | unable to find current IP address of domain old-k8s-version-488586 in network mk-old-k8s-version-488586
	I0127 12:27:24.072582 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | I0127 12:27:24.072508 1771043 retry.go:31] will retry after 2.779322033s: waiting for domain to come up
	I0127 12:27:26.853728 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:26.854193 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | unable to find current IP address of domain old-k8s-version-488586 in network mk-old-k8s-version-488586
	I0127 12:27:26.854222 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | I0127 12:27:26.854165 1771043 retry.go:31] will retry after 2.868198629s: waiting for domain to come up
	I0127 12:27:29.724050 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:29.724493 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | unable to find current IP address of domain old-k8s-version-488586 in network mk-old-k8s-version-488586
	I0127 12:27:29.724517 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | I0127 12:27:29.724441 1771043 retry.go:31] will retry after 4.832812017s: waiting for domain to come up
	I0127 12:27:34.562050 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:34.562573 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has current primary IP address 192.168.39.109 and MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:34.562596 1770976 main.go:141] libmachine: (old-k8s-version-488586) found domain IP: 192.168.39.109
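
The retry.go:31 lines above are a poll-with-growing-backoff loop that keeps asking libvirt for the guest's DHCP lease until one appears; a simplified sketch of that pattern (the lookupIP helper shelling out to `virsh domifaddr` is an assumption, not the driver's actual lease parser):

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"strings"
	"time"
)

// lookupIP asks libvirt for the domain's current address; an empty string
// means the guest has not obtained a DHCP lease yet.
func lookupIP(domain string) string {
	out, err := exec.Command("virsh", "domifaddr", domain).Output()
	if err != nil {
		return ""
	}
	for _, f := range strings.Fields(string(out)) {
		if strings.HasPrefix(f, "192.168.") {
			return strings.SplitN(f, "/", 2)[0]
		}
	}
	return ""
}

// waitForIP retries with a growing, jittered delay, as in the log above.
func waitForIP(domain string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip := lookupIP(domain); ip != "" {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", fmt.Errorf("timed out waiting for %s to obtain an IP", domain)
}

func main() {
	ip, err := waitForIP("old-k8s-version-488586", 2*time.Minute)
	if err != nil {
		panic(err)
	}
	fmt.Println("found domain IP:", ip)
}
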
	I0127 12:27:34.562649 1770976 main.go:141] libmachine: (old-k8s-version-488586) reserving static IP address...
	I0127 12:27:34.563047 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-488586", mac: "52:54:00:ec:6f:18", ip: "192.168.39.109"} in network mk-old-k8s-version-488586
	I0127 12:27:34.636185 1770976 main.go:141] libmachine: (old-k8s-version-488586) reserved static IP address 192.168.39.109 for domain old-k8s-version-488586
	I0127 12:27:34.636223 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | Getting to WaitForSSH function...
	I0127 12:27:34.636248 1770976 main.go:141] libmachine: (old-k8s-version-488586) waiting for SSH...
	I0127 12:27:34.639000 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:34.639390 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:6f:18", ip: ""} in network mk-old-k8s-version-488586: {Iface:virbr4 ExpiryTime:2025-01-27 13:27:25 +0000 UTC Type:0 Mac:52:54:00:ec:6f:18 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ec:6f:18}
	I0127 12:27:34.639424 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined IP address 192.168.39.109 and MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:34.639570 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | Using SSH client type: external
	I0127 12:27:34.639615 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | Using SSH private key: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/old-k8s-version-488586/id_rsa (-rw-------)
	I0127 12:27:34.639657 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.109 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/old-k8s-version-488586/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 12:27:34.639677 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | About to run SSH command:
	I0127 12:27:34.639697 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | exit 0
	I0127 12:27:34.767123 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | SSH cmd err, output: <nil>: 
	I0127 12:27:34.767387 1770976 main.go:141] libmachine: (old-k8s-version-488586) KVM machine creation complete
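
WaitForSSH amounts to running `exit 0` through the external ssh client with the flags printed at 12:27:34.639657 until the command succeeds; a compact sketch (host, key path and poll interval are placeholders):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady returns true once `ssh ... exit 0` succeeds, i.e. sshd in the
// guest is up and accepting the machine's key.
func sshReady(host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@"+host,
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	host, key := "192.168.39.109", "/path/to/id_rsa" // placeholders
	for !sshReady(host, key) {
		fmt.Println("Getting to WaitForSSH function...")
		time.Sleep(3 * time.Second)
	}
	fmt.Println("SSH is available")
}
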
	I0127 12:27:34.767709 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetConfigRaw
	I0127 12:27:34.768307 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .DriverName
	I0127 12:27:34.768541 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .DriverName
	I0127 12:27:34.768768 1770976 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0127 12:27:34.768787 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetState
	I0127 12:27:34.770500 1770976 main.go:141] libmachine: Detecting operating system of created instance...
	I0127 12:27:34.770519 1770976 main.go:141] libmachine: Waiting for SSH to be available...
	I0127 12:27:34.770526 1770976 main.go:141] libmachine: Getting to WaitForSSH function...
	I0127 12:27:34.770534 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHHostname
	I0127 12:27:34.773257 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:34.773677 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:6f:18", ip: ""} in network mk-old-k8s-version-488586: {Iface:virbr4 ExpiryTime:2025-01-27 13:27:25 +0000 UTC Type:0 Mac:52:54:00:ec:6f:18 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:old-k8s-version-488586 Clientid:01:52:54:00:ec:6f:18}
	I0127 12:27:34.773716 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined IP address 192.168.39.109 and MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:34.773797 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHPort
	I0127 12:27:34.773968 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHKeyPath
	I0127 12:27:34.774122 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHKeyPath
	I0127 12:27:34.774289 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHUsername
	I0127 12:27:34.774468 1770976 main.go:141] libmachine: Using SSH client type: native
	I0127 12:27:34.774757 1770976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0127 12:27:34.774775 1770976 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0127 12:27:34.877622 1770976 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 12:27:34.877651 1770976 main.go:141] libmachine: Detecting the provisioner...
	I0127 12:27:34.877662 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHHostname
	I0127 12:27:34.880359 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:34.880721 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:6f:18", ip: ""} in network mk-old-k8s-version-488586: {Iface:virbr4 ExpiryTime:2025-01-27 13:27:25 +0000 UTC Type:0 Mac:52:54:00:ec:6f:18 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:old-k8s-version-488586 Clientid:01:52:54:00:ec:6f:18}
	I0127 12:27:34.880742 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined IP address 192.168.39.109 and MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:34.880927 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHPort
	I0127 12:27:34.881171 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHKeyPath
	I0127 12:27:34.881381 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHKeyPath
	I0127 12:27:34.881532 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHUsername
	I0127 12:27:34.881715 1770976 main.go:141] libmachine: Using SSH client type: native
	I0127 12:27:34.881884 1770976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0127 12:27:34.881895 1770976 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0127 12:27:34.986998 1770976 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0127 12:27:34.987118 1770976 main.go:141] libmachine: found compatible host: buildroot
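
Provisioner detection is just `cat /etc/os-release` plus key=value parsing of the output shown above; a minimal sketch:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns os-release output into a map, stripping optional
// quotes from the values.
func parseOSRelease(raw string) map[string]string {
	info := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(raw))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || !strings.Contains(line, "=") {
			continue
		}
		kv := strings.SplitN(line, "=", 2)
		info[kv[0]] = strings.Trim(kv[1], `"`)
	}
	return info
}

func main() {
	raw := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(raw)
	if info["ID"] == "buildroot" {
		fmt.Println("found compatible host: buildroot")
	}
}
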
	I0127 12:27:34.987135 1770976 main.go:141] libmachine: Provisioning with buildroot...
	I0127 12:27:34.987152 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetMachineName
	I0127 12:27:34.987391 1770976 buildroot.go:166] provisioning hostname "old-k8s-version-488586"
	I0127 12:27:34.987427 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetMachineName
	I0127 12:27:34.987593 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHHostname
	I0127 12:27:34.990121 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:34.990454 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:6f:18", ip: ""} in network mk-old-k8s-version-488586: {Iface:virbr4 ExpiryTime:2025-01-27 13:27:25 +0000 UTC Type:0 Mac:52:54:00:ec:6f:18 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:old-k8s-version-488586 Clientid:01:52:54:00:ec:6f:18}
	I0127 12:27:34.990492 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined IP address 192.168.39.109 and MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:34.990601 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHPort
	I0127 12:27:34.990807 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHKeyPath
	I0127 12:27:34.990990 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHKeyPath
	I0127 12:27:34.991126 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHUsername
	I0127 12:27:34.991288 1770976 main.go:141] libmachine: Using SSH client type: native
	I0127 12:27:34.991477 1770976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0127 12:27:34.991489 1770976 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-488586 && echo "old-k8s-version-488586" | sudo tee /etc/hostname
	I0127 12:27:35.111174 1770976 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-488586
	
	I0127 12:27:35.111208 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHHostname
	I0127 12:27:35.113926 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:35.114307 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:6f:18", ip: ""} in network mk-old-k8s-version-488586: {Iface:virbr4 ExpiryTime:2025-01-27 13:27:25 +0000 UTC Type:0 Mac:52:54:00:ec:6f:18 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:old-k8s-version-488586 Clientid:01:52:54:00:ec:6f:18}
	I0127 12:27:35.114340 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined IP address 192.168.39.109 and MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:35.114498 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHPort
	I0127 12:27:35.114692 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHKeyPath
	I0127 12:27:35.114892 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHKeyPath
	I0127 12:27:35.115044 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHUsername
	I0127 12:27:35.115222 1770976 main.go:141] libmachine: Using SSH client type: native
	I0127 12:27:35.115446 1770976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0127 12:27:35.115479 1770976 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-488586' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-488586/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-488586' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 12:27:35.227272 1770976 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 12:27:35.227306 1770976 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20318-1724227/.minikube CaCertPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20318-1724227/.minikube}
	I0127 12:27:35.227328 1770976 buildroot.go:174] setting up certificates
	I0127 12:27:35.227342 1770976 provision.go:84] configureAuth start
	I0127 12:27:35.227356 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetMachineName
	I0127 12:27:35.227661 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetIP
	I0127 12:27:35.230564 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:35.230961 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:6f:18", ip: ""} in network mk-old-k8s-version-488586: {Iface:virbr4 ExpiryTime:2025-01-27 13:27:25 +0000 UTC Type:0 Mac:52:54:00:ec:6f:18 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:old-k8s-version-488586 Clientid:01:52:54:00:ec:6f:18}
	I0127 12:27:35.230987 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined IP address 192.168.39.109 and MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:35.231140 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHHostname
	I0127 12:27:35.233468 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:35.233829 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:6f:18", ip: ""} in network mk-old-k8s-version-488586: {Iface:virbr4 ExpiryTime:2025-01-27 13:27:25 +0000 UTC Type:0 Mac:52:54:00:ec:6f:18 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:old-k8s-version-488586 Clientid:01:52:54:00:ec:6f:18}
	I0127 12:27:35.233854 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined IP address 192.168.39.109 and MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:35.233980 1770976 provision.go:143] copyHostCerts
	I0127 12:27:35.234047 1770976 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-1724227/.minikube/cert.pem, removing ...
	I0127 12:27:35.234066 1770976 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-1724227/.minikube/cert.pem
	I0127 12:27:35.234123 1770976 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20318-1724227/.minikube/cert.pem (1123 bytes)
	I0127 12:27:35.234291 1770976 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-1724227/.minikube/key.pem, removing ...
	I0127 12:27:35.234304 1770976 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-1724227/.minikube/key.pem
	I0127 12:27:35.234332 1770976 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20318-1724227/.minikube/key.pem (1675 bytes)
	I0127 12:27:35.234407 1770976 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.pem, removing ...
	I0127 12:27:35.234419 1770976 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.pem
	I0127 12:27:35.234448 1770976 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.pem (1078 bytes)
	I0127 12:27:35.234519 1770976 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-488586 san=[127.0.0.1 192.168.39.109 localhost minikube old-k8s-version-488586]
	I0127 12:27:35.448756 1770976 provision.go:177] copyRemoteCerts
	I0127 12:27:35.448816 1770976 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 12:27:35.448848 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHHostname
	I0127 12:27:35.451401 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:35.451768 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:6f:18", ip: ""} in network mk-old-k8s-version-488586: {Iface:virbr4 ExpiryTime:2025-01-27 13:27:25 +0000 UTC Type:0 Mac:52:54:00:ec:6f:18 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:old-k8s-version-488586 Clientid:01:52:54:00:ec:6f:18}
	I0127 12:27:35.451800 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined IP address 192.168.39.109 and MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:35.451932 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHPort
	I0127 12:27:35.452134 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHKeyPath
	I0127 12:27:35.452320 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHUsername
	I0127 12:27:35.452454 1770976 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/old-k8s-version-488586/id_rsa Username:docker}
	I0127 12:27:35.536609 1770976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0127 12:27:35.561217 1770976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 12:27:35.584506 1770976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 12:27:35.607448 1770976 provision.go:87] duration metric: took 380.092009ms to configureAuth
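
copyRemoteCerts pushes the freshly generated server certificate, its key and the CA into /etc/docker on the guest. A sketch of the transfer using plain scp (the real runner streams the files over its existing SSH session and writes them with root privileges; a direct scp into /etc would additionally need write access there):

package main

import (
	"fmt"
	"os/exec"
)

// copyCert copies a local PEM file to the guest with scp, roughly what
// ssh_runner.go:362 reports as "scp ... --> /etc/docker/...".
func copyCert(keyPath, host, local, remote string) error {
	cmd := exec.Command("scp",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-i", keyPath,
		local,
		fmt.Sprintf("docker@%s:%s", host, remote))
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("scp %s: %v: %s", local, err, out)
	}
	return nil
}

func main() {
	key, host := "/path/to/id_rsa", "192.168.39.109" // placeholders
	for local, remote := range map[string]string{
		"server.pem":     "/etc/docker/server.pem",
		"server-key.pem": "/etc/docker/server-key.pem",
		"ca.pem":         "/etc/docker/ca.pem",
	} {
		if err := copyCert(key, host, local, remote); err != nil {
			panic(err)
		}
	}
}
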
	I0127 12:27:35.607473 1770976 buildroot.go:189] setting minikube options for container-runtime
	I0127 12:27:35.607635 1770976 config.go:182] Loaded profile config "old-k8s-version-488586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 12:27:35.607714 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHHostname
	I0127 12:27:35.610559 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:35.610884 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:6f:18", ip: ""} in network mk-old-k8s-version-488586: {Iface:virbr4 ExpiryTime:2025-01-27 13:27:25 +0000 UTC Type:0 Mac:52:54:00:ec:6f:18 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:old-k8s-version-488586 Clientid:01:52:54:00:ec:6f:18}
	I0127 12:27:35.610915 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined IP address 192.168.39.109 and MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:35.611059 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHPort
	I0127 12:27:35.611262 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHKeyPath
	I0127 12:27:35.611406 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHKeyPath
	I0127 12:27:35.611526 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHUsername
	I0127 12:27:35.611667 1770976 main.go:141] libmachine: Using SSH client type: native
	I0127 12:27:35.611835 1770976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0127 12:27:35.611852 1770976 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 12:27:35.833662 1770976 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 12:27:35.833696 1770976 main.go:141] libmachine: Checking connection to Docker...
	I0127 12:27:35.833708 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetURL
	I0127 12:27:35.835069 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | using libvirt version 6000000
	I0127 12:27:35.837220 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:35.837538 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:6f:18", ip: ""} in network mk-old-k8s-version-488586: {Iface:virbr4 ExpiryTime:2025-01-27 13:27:25 +0000 UTC Type:0 Mac:52:54:00:ec:6f:18 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:old-k8s-version-488586 Clientid:01:52:54:00:ec:6f:18}
	I0127 12:27:35.837579 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined IP address 192.168.39.109 and MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:35.837719 1770976 main.go:141] libmachine: Docker is up and running!
	I0127 12:27:35.837735 1770976 main.go:141] libmachine: Reticulating splines...
	I0127 12:27:35.837745 1770976 client.go:171] duration metric: took 25.182078423s to LocalClient.Create
	I0127 12:27:35.837778 1770976 start.go:167] duration metric: took 25.182168531s to libmachine.API.Create "old-k8s-version-488586"
	I0127 12:27:35.837791 1770976 start.go:293] postStartSetup for "old-k8s-version-488586" (driver="kvm2")
	I0127 12:27:35.837821 1770976 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 12:27:35.837863 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .DriverName
	I0127 12:27:35.838081 1770976 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 12:27:35.838108 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHHostname
	I0127 12:27:35.840382 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:35.840677 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:6f:18", ip: ""} in network mk-old-k8s-version-488586: {Iface:virbr4 ExpiryTime:2025-01-27 13:27:25 +0000 UTC Type:0 Mac:52:54:00:ec:6f:18 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:old-k8s-version-488586 Clientid:01:52:54:00:ec:6f:18}
	I0127 12:27:35.840705 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined IP address 192.168.39.109 and MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:35.840866 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHPort
	I0127 12:27:35.841038 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHKeyPath
	I0127 12:27:35.841184 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHUsername
	I0127 12:27:35.841304 1770976 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/old-k8s-version-488586/id_rsa Username:docker}
	I0127 12:27:35.920722 1770976 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 12:27:35.924568 1770976 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 12:27:35.924594 1770976 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-1724227/.minikube/addons for local assets ...
	I0127 12:27:35.924663 1770976 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-1724227/.minikube/files for local assets ...
	I0127 12:27:35.924779 1770976 filesync.go:149] local asset: /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem -> 17313962.pem in /etc/ssl/certs
	I0127 12:27:35.924901 1770976 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 12:27:35.933607 1770976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem --> /etc/ssl/certs/17313962.pem (1708 bytes)
	I0127 12:27:35.957295 1770976 start.go:296] duration metric: took 119.488812ms for postStartSetup
	I0127 12:27:35.957341 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetConfigRaw
	I0127 12:27:35.957947 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetIP
	I0127 12:27:35.960341 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:35.960703 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:6f:18", ip: ""} in network mk-old-k8s-version-488586: {Iface:virbr4 ExpiryTime:2025-01-27 13:27:25 +0000 UTC Type:0 Mac:52:54:00:ec:6f:18 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:old-k8s-version-488586 Clientid:01:52:54:00:ec:6f:18}
	I0127 12:27:35.960729 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined IP address 192.168.39.109 and MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:35.961018 1770976 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/old-k8s-version-488586/config.json ...
	I0127 12:27:35.961234 1770976 start.go:128] duration metric: took 25.326254936s to createHost
	I0127 12:27:35.961283 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHHostname
	I0127 12:27:35.963475 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:35.963769 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:6f:18", ip: ""} in network mk-old-k8s-version-488586: {Iface:virbr4 ExpiryTime:2025-01-27 13:27:25 +0000 UTC Type:0 Mac:52:54:00:ec:6f:18 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:old-k8s-version-488586 Clientid:01:52:54:00:ec:6f:18}
	I0127 12:27:35.963809 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined IP address 192.168.39.109 and MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:35.963904 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHPort
	I0127 12:27:35.964083 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHKeyPath
	I0127 12:27:35.964263 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHKeyPath
	I0127 12:27:35.964412 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHUsername
	I0127 12:27:35.964576 1770976 main.go:141] libmachine: Using SSH client type: native
	I0127 12:27:35.964779 1770976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0127 12:27:35.964794 1770976 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 12:27:36.066994 1770976 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737980856.044669232
	
	I0127 12:27:36.067020 1770976 fix.go:216] guest clock: 1737980856.044669232
	I0127 12:27:36.067026 1770976 fix.go:229] Guest: 2025-01-27 12:27:36.044669232 +0000 UTC Remote: 2025-01-27 12:27:35.961267685 +0000 UTC m=+30.561336550 (delta=83.401547ms)
	I0127 12:27:36.067064 1770976 fix.go:200] guest clock delta is within tolerance: 83.401547ms
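
fix.go compares the guest's `date +%s.%N` against the host-side reference time and accepts the machine when the delta is small; a tiny sketch with the values from the log (the one-second tolerance is an assumption):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest clock from `date +%s.%N` and the host-side remote timestamp above.
	guest := time.Unix(1737980856, 44669232)
	remote := time.Date(2025, time.January, 27, 12, 27, 35, 961267685, time.UTC)

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	tolerance := time.Second // assumed tolerance
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
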
	I0127 12:27:36.067069 1770976 start.go:83] releasing machines lock for "old-k8s-version-488586", held for 25.432272605s
	I0127 12:27:36.067091 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .DriverName
	I0127 12:27:36.067348 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetIP
	I0127 12:27:36.070197 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:36.070575 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:6f:18", ip: ""} in network mk-old-k8s-version-488586: {Iface:virbr4 ExpiryTime:2025-01-27 13:27:25 +0000 UTC Type:0 Mac:52:54:00:ec:6f:18 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:old-k8s-version-488586 Clientid:01:52:54:00:ec:6f:18}
	I0127 12:27:36.070608 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined IP address 192.168.39.109 and MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:36.070791 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .DriverName
	I0127 12:27:36.071284 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .DriverName
	I0127 12:27:36.071463 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .DriverName
	I0127 12:27:36.071560 1770976 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 12:27:36.071616 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHHostname
	I0127 12:27:36.071664 1770976 ssh_runner.go:195] Run: cat /version.json
	I0127 12:27:36.071687 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHHostname
	I0127 12:27:36.074131 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:36.074261 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:36.074458 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:6f:18", ip: ""} in network mk-old-k8s-version-488586: {Iface:virbr4 ExpiryTime:2025-01-27 13:27:25 +0000 UTC Type:0 Mac:52:54:00:ec:6f:18 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:old-k8s-version-488586 Clientid:01:52:54:00:ec:6f:18}
	I0127 12:27:36.074489 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined IP address 192.168.39.109 and MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:36.074666 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHPort
	I0127 12:27:36.074771 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:6f:18", ip: ""} in network mk-old-k8s-version-488586: {Iface:virbr4 ExpiryTime:2025-01-27 13:27:25 +0000 UTC Type:0 Mac:52:54:00:ec:6f:18 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:old-k8s-version-488586 Clientid:01:52:54:00:ec:6f:18}
	I0127 12:27:36.074803 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined IP address 192.168.39.109 and MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:36.074962 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHPort
	I0127 12:27:36.074966 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHKeyPath
	I0127 12:27:36.075128 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHUsername
	I0127 12:27:36.075135 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHKeyPath
	I0127 12:27:36.075320 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHUsername
	I0127 12:27:36.075314 1770976 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/old-k8s-version-488586/id_rsa Username:docker}
	I0127 12:27:36.075447 1770976 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/old-k8s-version-488586/id_rsa Username:docker}
	I0127 12:27:36.151384 1770976 ssh_runner.go:195] Run: systemctl --version
	I0127 12:27:36.178041 1770976 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 12:27:36.333167 1770976 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 12:27:36.339206 1770976 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 12:27:36.339296 1770976 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 12:27:36.354538 1770976 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 12:27:36.354563 1770976 start.go:495] detecting cgroup driver to use...
	I0127 12:27:36.354630 1770976 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 12:27:36.373549 1770976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 12:27:36.387110 1770976 docker.go:217] disabling cri-docker service (if available) ...
	I0127 12:27:36.387171 1770976 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 12:27:36.399258 1770976 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 12:27:36.411349 1770976 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 12:27:36.524610 1770976 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 12:27:36.681563 1770976 docker.go:233] disabling docker service ...
	I0127 12:27:36.681643 1770976 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 12:27:36.695319 1770976 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 12:27:36.708063 1770976 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 12:27:36.836287 1770976 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 12:27:36.967333 1770976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 12:27:36.980890 1770976 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 12:27:36.998932 1770976 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0127 12:27:36.998993 1770976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:27:37.010244 1770976 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 12:27:37.010299 1770976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:27:37.019276 1770976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:27:37.028749 1770976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:27:37.038606 1770976 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 12:27:37.048713 1770976 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 12:27:37.057150 1770976 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 12:27:37.057198 1770976 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 12:27:37.069769 1770976 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 12:27:37.078107 1770976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:27:37.205401 1770976 ssh_runner.go:195] Run: sudo systemctl restart crio
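
The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon_cgroup) and then restarts CRI-O, each step executed through ssh_runner. A condensed sketch driving the same remote commands over a plain ssh client (host and key are placeholders; the command list is copied from the log, slightly shortened):

package main

import (
	"fmt"
	"os/exec"
)

// runRemote executes a single shell command in the guest, the way
// ssh_runner.go:195 does for each step above.
func runRemote(host, keyPath, command string) error {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-i", keyPath,
		"docker@"+host,
		command)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("%q: %v: %s", command, err, out)
	}
	return nil
}

func main() {
	host, key := "192.168.39.109", "/path/to/id_rsa" // placeholders
	steps := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart crio`,
	}
	for _, s := range steps {
		if err := runRemote(host, key, s); err != nil {
			panic(err)
		}
	}
}
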
	I0127 12:27:37.292357 1770976 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 12:27:37.292455 1770976 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 12:27:37.297203 1770976 start.go:563] Will wait 60s for crictl version
	I0127 12:27:37.297356 1770976 ssh_runner.go:195] Run: which crictl
	I0127 12:27:37.300828 1770976 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 12:27:37.341365 1770976 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 12:27:37.341458 1770976 ssh_runner.go:195] Run: crio --version
	I0127 12:27:37.370007 1770976 ssh_runner.go:195] Run: crio --version
	I0127 12:27:37.395899 1770976 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0127 12:27:37.396935 1770976 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetIP
	I0127 12:27:37.399576 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:37.399903 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:6f:18", ip: ""} in network mk-old-k8s-version-488586: {Iface:virbr4 ExpiryTime:2025-01-27 13:27:25 +0000 UTC Type:0 Mac:52:54:00:ec:6f:18 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:old-k8s-version-488586 Clientid:01:52:54:00:ec:6f:18}
	I0127 12:27:37.399934 1770976 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined IP address 192.168.39.109 and MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:27:37.400102 1770976 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0127 12:27:37.403711 1770976 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:27:37.415078 1770976 kubeadm.go:883] updating cluster {Name:old-k8s-version-488586 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-488586 Namespace
:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 12:27:37.415183 1770976 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 12:27:37.415230 1770976 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 12:27:37.445483 1770976 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 12:27:37.445539 1770976 ssh_runner.go:195] Run: which lz4
	I0127 12:27:37.449139 1770976 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 12:27:37.453277 1770976 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 12:27:37.453306 1770976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0127 12:27:38.871500 1770976 crio.go:462] duration metric: took 1.422389925s to copy over tarball
	I0127 12:27:38.871596 1770976 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 12:27:41.333437 1770976 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.461774708s)
	I0127 12:27:41.333492 1770976 crio.go:469] duration metric: took 2.461953944s to extract the tarball
	I0127 12:27:41.333503 1770976 ssh_runner.go:146] rm: /preloaded.tar.lz4
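A sketch of the preload path used above: the ~473 MB tarball is copied from the host cache to /preloaded.tar.lz4 on the guest, unpacked into /var, then removed, exactly as the surrounding lines do:
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm /preloaded.tar.lz4            # cleaned up after extraction
	sudo crictl images --output json      # re-list images to see what the preload provided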
	I0127 12:27:41.375223 1770976 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 12:27:41.415208 1770976 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 12:27:41.415236 1770976 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 12:27:41.415317 1770976 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:27:41.415372 1770976 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0127 12:27:41.415404 1770976 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0127 12:27:41.415411 1770976 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 12:27:41.415323 1770976 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0127 12:27:41.415323 1770976 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 12:27:41.415568 1770976 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 12:27:41.416310 1770976 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 12:27:41.417943 1770976 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 12:27:41.417963 1770976 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0127 12:27:41.418218 1770976 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:27:41.418228 1770976 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 12:27:41.418368 1770976 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 12:27:41.418403 1770976 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0127 12:27:41.418453 1770976 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 12:27:41.418711 1770976 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0127 12:27:41.615162 1770976 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0127 12:27:41.640702 1770976 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0127 12:27:41.640904 1770976 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 12:27:41.646012 1770976 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0127 12:27:41.656508 1770976 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0127 12:27:41.659609 1770976 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0127 12:27:41.671175 1770976 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0127 12:27:41.671243 1770976 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0127 12:27:41.671291 1770976 ssh_runner.go:195] Run: which crictl
	I0127 12:27:41.679524 1770976 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0127 12:27:41.748048 1770976 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0127 12:27:41.748107 1770976 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 12:27:41.748125 1770976 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0127 12:27:41.748161 1770976 ssh_runner.go:195] Run: which crictl
	I0127 12:27:41.748165 1770976 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0127 12:27:41.748212 1770976 ssh_runner.go:195] Run: which crictl
	I0127 12:27:41.772186 1770976 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0127 12:27:41.772239 1770976 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 12:27:41.772296 1770976 ssh_runner.go:195] Run: which crictl
	I0127 12:27:41.788040 1770976 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0127 12:27:41.788078 1770976 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 12:27:41.788122 1770976 ssh_runner.go:195] Run: which crictl
	I0127 12:27:41.790852 1770976 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0127 12:27:41.790893 1770976 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 12:27:41.790905 1770976 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 12:27:41.790928 1770976 ssh_runner.go:195] Run: which crictl
	I0127 12:27:41.805624 1770976 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0127 12:27:41.805651 1770976 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 12:27:41.805674 1770976 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0127 12:27:41.805684 1770976 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 12:27:41.805714 1770976 ssh_runner.go:195] Run: which crictl
	I0127 12:27:41.805651 1770976 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 12:27:41.805729 1770976 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 12:27:41.844549 1770976 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 12:27:41.844638 1770976 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 12:27:41.844639 1770976 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 12:27:41.961296 1770976 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 12:27:41.961350 1770976 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 12:27:41.961350 1770976 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 12:27:41.961478 1770976 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 12:27:42.008527 1770976 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 12:27:42.008532 1770976 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 12:27:42.008703 1770976 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 12:27:42.088686 1770976 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 12:27:42.088743 1770976 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 12:27:42.088766 1770976 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 12:27:42.088742 1770976 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 12:27:42.092742 1770976 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0127 12:27:42.139130 1770976 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 12:27:42.139136 1770976 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 12:27:42.217193 1770976 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0127 12:27:42.217232 1770976 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0127 12:27:42.217301 1770976 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0127 12:27:42.217316 1770976 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0127 12:27:42.233926 1770976 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0127 12:27:42.234156 1770976 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0127 12:27:42.677053 1770976 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:27:42.817586 1770976 cache_images.go:92] duration metric: took 1.402321008s to LoadCachedImages
	W0127 12:27:42.817719 1770976 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
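The LoadCachedImages failure above means the required v1.20.0 images are neither in the CRI-O image store nor present as files in the host cache directory; a sketch of checking both sides by hand (paths taken from this log, the first two commands run on the guest, the ls on the host):
	sudo crictl images --output json                                        # what the runtime already has
	sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2   # per-image presence check
	ls /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/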
	I0127 12:27:42.817743 1770976 kubeadm.go:934] updating node { 192.168.39.109 8443 v1.20.0 crio true true} ...
	I0127 12:27:42.817883 1770976 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-488586 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.109
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-488586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 12:27:42.817970 1770976 ssh_runner.go:195] Run: crio config
	I0127 12:27:42.865911 1770976 cni.go:84] Creating CNI manager for ""
	I0127 12:27:42.865937 1770976 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 12:27:42.865947 1770976 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 12:27:42.865972 1770976 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.109 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-488586 NodeName:old-k8s-version-488586 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.109"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.109 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0127 12:27:42.866142 1770976 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.109
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-488586"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.109
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.109"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
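Once the kubeadm config above has been copied to /var/tmp/minikube/kubeadm.yaml (the cp happens further down), two hedged ways to exercise it before the real init; both are standard kubeadm invocations rather than anything this log runs, and the images-pull form is the one kubeadm's own preflight hint recommends:
	sudo kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml   # pre-pull the v1.20.0 images
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run       # render manifests without starting the cluster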
	
	I0127 12:27:42.866222 1770976 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0127 12:27:42.875602 1770976 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 12:27:42.875677 1770976 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 12:27:42.884233 1770976 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0127 12:27:42.899627 1770976 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 12:27:42.914373 1770976 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0127 12:27:42.930410 1770976 ssh_runner.go:195] Run: grep 192.168.39.109	control-plane.minikube.internal$ /etc/hosts
	I0127 12:27:42.933783 1770976 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.109	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:27:42.944883 1770976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:27:43.075250 1770976 ssh_runner.go:195] Run: sudo systemctl start kubelet
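At this point the kubelet unit and 10-kubeadm.conf drop-in written above are loaded and the service is started; a sketch of inspecting the result with standard systemd commands (systemctl cat is an assumption, only the status check appears later in this log):
	systemctl cat kubelet              # shows /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf drop-in
	sudo systemctl status kubelet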
	I0127 12:27:43.092364 1770976 certs.go:68] Setting up /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/old-k8s-version-488586 for IP: 192.168.39.109
	I0127 12:27:43.092391 1770976 certs.go:194] generating shared ca certs ...
	I0127 12:27:43.092417 1770976 certs.go:226] acquiring lock for ca certs: {Name:mk9e5c59e9dbe250ccc1601895509e2d4f3690e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:27:43.092627 1770976 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.key
	I0127 12:27:43.092683 1770976 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.key
	I0127 12:27:43.092699 1770976 certs.go:256] generating profile certs ...
	I0127 12:27:43.092792 1770976 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/old-k8s-version-488586/client.key
	I0127 12:27:43.092818 1770976 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/old-k8s-version-488586/client.crt with IP's: []
	I0127 12:27:43.274465 1770976 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/old-k8s-version-488586/client.crt ...
	I0127 12:27:43.274510 1770976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/old-k8s-version-488586/client.crt: {Name:mk2e8fd70c6bea0fb8bb2489f6227fbbcf40e034 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:27:43.274668 1770976 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/old-k8s-version-488586/client.key ...
	I0127 12:27:43.274681 1770976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/old-k8s-version-488586/client.key: {Name:mkc2ad028b57161b28f0421cc3568ce1827fd4e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:27:43.274811 1770976 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/old-k8s-version-488586/apiserver.key.1691d3b4
	I0127 12:27:43.274847 1770976 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/old-k8s-version-488586/apiserver.crt.1691d3b4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.109]
	I0127 12:27:43.430103 1770976 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/old-k8s-version-488586/apiserver.crt.1691d3b4 ...
	I0127 12:27:43.430138 1770976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/old-k8s-version-488586/apiserver.crt.1691d3b4: {Name:mka0e32ab76b0951439ac625eb06abf6313d0c06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:27:43.430293 1770976 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/old-k8s-version-488586/apiserver.key.1691d3b4 ...
	I0127 12:27:43.430309 1770976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/old-k8s-version-488586/apiserver.key.1691d3b4: {Name:mka9ca55c6b3b6e177e540ba296783b264091cb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:27:43.430378 1770976 certs.go:381] copying /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/old-k8s-version-488586/apiserver.crt.1691d3b4 -> /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/old-k8s-version-488586/apiserver.crt
	I0127 12:27:43.430476 1770976 certs.go:385] copying /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/old-k8s-version-488586/apiserver.key.1691d3b4 -> /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/old-k8s-version-488586/apiserver.key
	I0127 12:27:43.430536 1770976 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/old-k8s-version-488586/proxy-client.key
	I0127 12:27:43.430553 1770976 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/old-k8s-version-488586/proxy-client.crt with IP's: []
	I0127 12:27:43.508151 1770976 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/old-k8s-version-488586/proxy-client.crt ...
	I0127 12:27:43.508182 1770976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/old-k8s-version-488586/proxy-client.crt: {Name:mk305f9c2781d932f1ef79717da55fade9026c72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:27:43.508332 1770976 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/old-k8s-version-488586/proxy-client.key ...
	I0127 12:27:43.508346 1770976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/old-k8s-version-488586/proxy-client.key: {Name:mkd14196853b70c08530e03669b47e713df1a1b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:27:43.508509 1770976 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/1731396.pem (1338 bytes)
	W0127 12:27:43.508544 1770976 certs.go:480] ignoring /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/1731396_empty.pem, impossibly tiny 0 bytes
	I0127 12:27:43.508555 1770976 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 12:27:43.508580 1770976 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem (1078 bytes)
	I0127 12:27:43.508602 1770976 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem (1123 bytes)
	I0127 12:27:43.508622 1770976 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/key.pem (1675 bytes)
	I0127 12:27:43.508661 1770976 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem (1708 bytes)
	I0127 12:27:43.509364 1770976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 12:27:43.535723 1770976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 12:27:43.557026 1770976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 12:27:43.580574 1770976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 12:27:43.602437 1770976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/old-k8s-version-488586/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0127 12:27:43.625013 1770976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/old-k8s-version-488586/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 12:27:43.646583 1770976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/old-k8s-version-488586/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 12:27:43.668544 1770976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/old-k8s-version-488586/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 12:27:43.690894 1770976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem --> /usr/share/ca-certificates/17313962.pem (1708 bytes)
	I0127 12:27:43.715517 1770976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 12:27:43.738324 1770976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/1731396.pem --> /usr/share/ca-certificates/1731396.pem (1338 bytes)
	I0127 12:27:43.763298 1770976 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 12:27:43.778909 1770976 ssh_runner.go:195] Run: openssl version
	I0127 12:27:43.784519 1770976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17313962.pem && ln -fs /usr/share/ca-certificates/17313962.pem /etc/ssl/certs/17313962.pem"
	I0127 12:27:43.794698 1770976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17313962.pem
	I0127 12:27:43.799216 1770976 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 11:32 /usr/share/ca-certificates/17313962.pem
	I0127 12:27:43.799268 1770976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17313962.pem
	I0127 12:27:43.804930 1770976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17313962.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 12:27:43.814950 1770976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 12:27:43.825533 1770976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:27:43.830123 1770976 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 11:24 /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:27:43.830180 1770976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:27:43.835535 1770976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 12:27:43.845830 1770976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1731396.pem && ln -fs /usr/share/ca-certificates/1731396.pem /etc/ssl/certs/1731396.pem"
	I0127 12:27:43.856200 1770976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1731396.pem
	I0127 12:27:43.860399 1770976 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 11:32 /usr/share/ca-certificates/1731396.pem
	I0127 12:27:43.860458 1770976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1731396.pem
	I0127 12:27:43.865797 1770976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1731396.pem /etc/ssl/certs/51391683.0"
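The test -s / ln -fs pairs above implement the usual OpenSSL subject-hash lookup: each CA under /usr/share/ca-certificates is linked into /etc/ssl/certs under the hash openssl prints. A sketch using the minikubeCA value from this log:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 here
	ls -l /etc/ssl/certs/b5213941.0                                           # symlink back to minikubeCA.pem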
	I0127 12:27:43.876393 1770976 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 12:27:43.880274 1770976 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 12:27:43.880349 1770976 kubeadm.go:392] StartCluster: {Name:old-k8s-version-488586 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-488586 Namespace:de
fault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:27:43.880467 1770976 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 12:27:43.880522 1770976 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 12:27:43.927654 1770976 cri.go:89] found id: ""
	I0127 12:27:43.927747 1770976 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 12:27:43.939052 1770976 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 12:27:43.950134 1770976 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 12:27:43.963589 1770976 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 12:27:43.963609 1770976 kubeadm.go:157] found existing configuration files:
	
	I0127 12:27:43.963661 1770976 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 12:27:43.974776 1770976 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 12:27:43.974846 1770976 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 12:27:43.984438 1770976 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 12:27:43.995278 1770976 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 12:27:43.995359 1770976 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 12:27:44.007773 1770976 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 12:27:44.016276 1770976 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 12:27:44.016346 1770976 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 12:27:44.025064 1770976 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 12:27:44.034167 1770976 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 12:27:44.034230 1770976 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 12:27:44.043441 1770976 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 12:27:44.160446 1770976 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 12:27:44.160527 1770976 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 12:27:44.301280 1770976 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 12:27:44.301375 1770976 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 12:27:44.301479 1770976 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 12:27:44.470506 1770976 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 12:27:44.561141 1770976 out.go:235]   - Generating certificates and keys ...
	I0127 12:27:44.561274 1770976 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 12:27:44.561395 1770976 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 12:27:44.571578 1770976 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 12:27:44.659849 1770976 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 12:27:44.887957 1770976 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 12:27:45.055226 1770976 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 12:27:45.354290 1770976 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 12:27:45.354586 1770976 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-488586] and IPs [192.168.39.109 127.0.0.1 ::1]
	I0127 12:27:45.671706 1770976 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 12:27:45.672016 1770976 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-488586] and IPs [192.168.39.109 127.0.0.1 ::1]
	I0127 12:27:45.966114 1770976 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 12:27:46.099035 1770976 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 12:27:46.263275 1770976 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 12:27:46.263363 1770976 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 12:27:46.522920 1770976 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 12:27:46.712497 1770976 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 12:27:46.906514 1770976 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 12:27:47.076611 1770976 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 12:27:47.092742 1770976 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 12:27:47.093276 1770976 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 12:27:47.093347 1770976 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 12:27:47.215653 1770976 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 12:27:47.217287 1770976 out.go:235]   - Booting up control plane ...
	I0127 12:27:47.217407 1770976 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 12:27:47.222733 1770976 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 12:27:47.231615 1770976 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 12:27:47.232900 1770976 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 12:27:47.239013 1770976 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 12:28:27.233065 1770976 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 12:28:27.233611 1770976 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 12:28:27.233824 1770976 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 12:28:32.234122 1770976 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 12:28:32.234405 1770976 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 12:28:42.233510 1770976 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 12:28:42.233816 1770976 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 12:29:02.232965 1770976 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 12:29:02.233193 1770976 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 12:29:42.234908 1770976 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 12:29:42.235523 1770976 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 12:29:42.235545 1770976 kubeadm.go:310] 
	I0127 12:29:42.235636 1770976 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 12:29:42.235762 1770976 kubeadm.go:310] 		timed out waiting for the condition
	I0127 12:29:42.235785 1770976 kubeadm.go:310] 
	I0127 12:29:42.235883 1770976 kubeadm.go:310] 	This error is likely caused by:
	I0127 12:29:42.235962 1770976 kubeadm.go:310] 		- The kubelet is not running
	I0127 12:29:42.236218 1770976 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 12:29:42.236238 1770976 kubeadm.go:310] 
	I0127 12:29:42.236509 1770976 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 12:29:42.236600 1770976 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 12:29:42.236714 1770976 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 12:29:42.236745 1770976 kubeadm.go:310] 
	I0127 12:29:42.237025 1770976 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 12:29:42.237253 1770976 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 12:29:42.237277 1770976 kubeadm.go:310] 
	I0127 12:29:42.237514 1770976 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 12:29:42.237796 1770976 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 12:29:42.238287 1770976 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 12:29:42.238455 1770976 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 12:29:42.238490 1770976 kubeadm.go:310] 
	I0127 12:29:42.238724 1770976 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 12:29:42.239047 1770976 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 12:29:42.239605 1770976 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0127 12:29:42.239752 1770976 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-488586] and IPs [192.168.39.109 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-488586] and IPs [192.168.39.109 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
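The failure above is the kubelet never answering its health endpoint, so kubeadm's wait-control-plane phase times out after 4m0s; the triage steps it suggests, gathered in one place (all commands are quoted from the output above, CONTAINERID is a placeholder):
	curl -sSL http://localhost:10248/healthz                                   # the probe that kept getting connection refused
	systemctl status kubelet
	journalctl -xeu kubelet
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID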
	
	I0127 12:29:42.239807 1770976 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 12:29:42.718386 1770976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:29:42.737559 1770976 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 12:29:42.751657 1770976 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 12:29:42.751683 1770976 kubeadm.go:157] found existing configuration files:
	
	I0127 12:29:42.751734 1770976 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 12:29:42.763538 1770976 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 12:29:42.763620 1770976 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 12:29:42.776456 1770976 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 12:29:42.788680 1770976 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 12:29:42.788756 1770976 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 12:29:42.801677 1770976 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 12:29:42.815190 1770976 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 12:29:42.815250 1770976 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 12:29:42.827374 1770976 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 12:29:42.836173 1770976 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 12:29:42.836238 1770976 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 12:29:42.845509 1770976 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 12:29:42.928834 1770976 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 12:29:42.929031 1770976 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 12:29:43.118934 1770976 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 12:29:43.119097 1770976 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 12:29:43.119281 1770976 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 12:29:43.308383 1770976 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 12:29:43.310313 1770976 out.go:235]   - Generating certificates and keys ...
	I0127 12:29:43.310489 1770976 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 12:29:43.310635 1770976 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 12:29:43.310794 1770976 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 12:29:43.310905 1770976 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 12:29:43.311014 1770976 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 12:29:43.311089 1770976 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 12:29:43.311292 1770976 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 12:29:43.311783 1770976 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 12:29:43.312228 1770976 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 12:29:43.312629 1770976 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 12:29:43.312799 1770976 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 12:29:43.312921 1770976 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 12:29:43.602725 1770976 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 12:29:43.857997 1770976 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 12:29:44.379554 1770976 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 12:29:44.494805 1770976 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 12:29:44.526062 1770976 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 12:29:44.528126 1770976 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 12:29:44.528223 1770976 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 12:29:44.719923 1770976 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 12:29:44.722034 1770976 out.go:235]   - Booting up control plane ...
	I0127 12:29:44.722151 1770976 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 12:29:44.738488 1770976 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 12:29:44.739892 1770976 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 12:29:44.740896 1770976 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 12:29:44.746347 1770976 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 12:30:24.748343 1770976 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 12:30:24.748878 1770976 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 12:30:24.749150 1770976 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 12:30:29.749719 1770976 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 12:30:29.749994 1770976 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 12:30:39.750623 1770976 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 12:30:39.750931 1770976 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 12:30:59.749897 1770976 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 12:30:59.750112 1770976 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 12:31:39.750380 1770976 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 12:31:39.750616 1770976 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 12:31:39.750630 1770976 kubeadm.go:310] 
	I0127 12:31:39.750663 1770976 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 12:31:39.750695 1770976 kubeadm.go:310] 		timed out waiting for the condition
	I0127 12:31:39.750702 1770976 kubeadm.go:310] 
	I0127 12:31:39.750738 1770976 kubeadm.go:310] 	This error is likely caused by:
	I0127 12:31:39.750813 1770976 kubeadm.go:310] 		- The kubelet is not running
	I0127 12:31:39.750961 1770976 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 12:31:39.750972 1770976 kubeadm.go:310] 
	I0127 12:31:39.751059 1770976 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 12:31:39.751090 1770976 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 12:31:39.751122 1770976 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 12:31:39.751129 1770976 kubeadm.go:310] 
	I0127 12:31:39.751206 1770976 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 12:31:39.751271 1770976 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 12:31:39.751277 1770976 kubeadm.go:310] 
	I0127 12:31:39.751385 1770976 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 12:31:39.751454 1770976 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 12:31:39.751522 1770976 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 12:31:39.751583 1770976 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 12:31:39.751591 1770976 kubeadm.go:310] 
	I0127 12:31:39.752431 1770976 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 12:31:39.752532 1770976 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 12:31:39.752597 1770976 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 12:31:39.752675 1770976 kubeadm.go:394] duration metric: took 3m55.872330445s to StartCluster
	I0127 12:31:39.752734 1770976 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:31:39.752805 1770976 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:31:39.794978 1770976 cri.go:89] found id: ""
	I0127 12:31:39.795006 1770976 logs.go:282] 0 containers: []
	W0127 12:31:39.795016 1770976 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:31:39.795022 1770976 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:31:39.795089 1770976 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:31:39.830171 1770976 cri.go:89] found id: ""
	I0127 12:31:39.830206 1770976 logs.go:282] 0 containers: []
	W0127 12:31:39.830216 1770976 logs.go:284] No container was found matching "etcd"
	I0127 12:31:39.830222 1770976 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:31:39.830276 1770976 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:31:39.861816 1770976 cri.go:89] found id: ""
	I0127 12:31:39.861849 1770976 logs.go:282] 0 containers: []
	W0127 12:31:39.861858 1770976 logs.go:284] No container was found matching "coredns"
	I0127 12:31:39.861869 1770976 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:31:39.861924 1770976 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:31:39.894638 1770976 cri.go:89] found id: ""
	I0127 12:31:39.894666 1770976 logs.go:282] 0 containers: []
	W0127 12:31:39.894674 1770976 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:31:39.894680 1770976 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:31:39.894752 1770976 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:31:39.939147 1770976 cri.go:89] found id: ""
	I0127 12:31:39.939179 1770976 logs.go:282] 0 containers: []
	W0127 12:31:39.939191 1770976 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:31:39.939200 1770976 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:31:39.939273 1770976 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:31:39.973139 1770976 cri.go:89] found id: ""
	I0127 12:31:39.973162 1770976 logs.go:282] 0 containers: []
	W0127 12:31:39.973179 1770976 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:31:39.973186 1770976 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:31:39.973232 1770976 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:31:40.004379 1770976 cri.go:89] found id: ""
	I0127 12:31:40.004405 1770976 logs.go:282] 0 containers: []
	W0127 12:31:40.004413 1770976 logs.go:284] No container was found matching "kindnet"
	I0127 12:31:40.004424 1770976 logs.go:123] Gathering logs for container status ...
	I0127 12:31:40.004437 1770976 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:31:40.038601 1770976 logs.go:123] Gathering logs for kubelet ...
	I0127 12:31:40.038631 1770976 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:31:40.089297 1770976 logs.go:123] Gathering logs for dmesg ...
	I0127 12:31:40.089325 1770976 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:31:40.101279 1770976 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:31:40.101304 1770976 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:31:40.201806 1770976 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:31:40.201833 1770976 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:31:40.201847 1770976 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0127 12:31:40.302713 1770976 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0127 12:31:40.302792 1770976 out.go:270] * 
	* 
	W0127 12:31:40.302875 1770976 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 12:31:40.302893 1770976 out.go:270] * 
	* 
	W0127 12:31:40.303788 1770976 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 12:31:40.306389 1770976 out.go:201] 
	W0127 12:31:40.307421 1770976 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 12:31:40.307466 1770976 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0127 12:31:40.307495 1770976 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0127 12:31:40.308654 1770976 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-488586 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-488586 -n old-k8s-version-488586
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-488586 -n old-k8s-version-488586: exit status 6 (234.885137ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 12:31:40.589064 1774217 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-488586" does not appear in /home/jenkins/minikube-integration/20318-1724227/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-488586" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (275.21s)
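Both kubeadm init attempts above fail the same way: the wait-control-plane phase times out because the kubelet never answers on localhost:10248, and the run exits with K8S_KUBELET_NOT_RUNNING. The suggestion minikube prints points at a kubelet cgroup-driver mismatch. A possible manual retry, reusing the args recorded for this run plus the suggested flag (illustrative only; it assumes the guest really does use systemd as its cgroup manager), would be:

	out/minikube-linux-amd64 start -p old-k8s-version-488586 --memory=2200 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd

The stale-context warning in the status output can be cleared separately with the command minikube itself recommends, e.g. `out/minikube-linux-amd64 -p old-k8s-version-488586 update-context`, though that only repairs kubectl's view of the profile and does not address the kubelet failure.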

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (51.34s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-502641 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-502641 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (47.714430307s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-502641] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20318
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20318-1724227/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-1724227/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-502641" primary control-plane node in "pause-502641" cluster
	* Updating the running kvm2 "pause-502641" VM ...
	* Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-502641" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
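The assertion at pause_test.go:100 above looks for the literal phrase "The running cluster does not require reconfiguration" in the second-start output; the stdout block instead shows the VM being updated and Kubernetes v1.32.1 being re-prepared, so the check fails. A rough manual reproduction of that check (a sketch reusing the exact start command from this test, not part of the harness itself) is:

	out/minikube-linux-amd64 start -p pause-502641 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio 2>&1 | grep "does not require reconfiguration"

The verbose stderr log that follows records the steps minikube took during that second start.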
** stderr ** 
	I0127 12:28:08.303795 1771581 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:28:08.304323 1771581 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:28:08.304384 1771581 out.go:358] Setting ErrFile to fd 2...
	I0127 12:28:08.304403 1771581 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:28:08.304890 1771581 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-1724227/.minikube/bin
	I0127 12:28:08.305739 1771581 out.go:352] Setting JSON to false
	I0127 12:28:08.306898 1771581 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":33029,"bootTime":1737947859,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 12:28:08.307019 1771581 start.go:139] virtualization: kvm guest
	I0127 12:28:08.308924 1771581 out.go:177] * [pause-502641] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 12:28:08.310279 1771581 notify.go:220] Checking for updates...
	I0127 12:28:08.310330 1771581 out.go:177]   - MINIKUBE_LOCATION=20318
	I0127 12:28:08.311433 1771581 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:28:08.312632 1771581 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20318-1724227/kubeconfig
	I0127 12:28:08.313718 1771581 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-1724227/.minikube
	I0127 12:28:08.314705 1771581 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 12:28:08.315747 1771581 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:28:08.317168 1771581 config.go:182] Loaded profile config "pause-502641": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:28:08.317662 1771581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:28:08.317709 1771581 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:28:08.335129 1771581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43977
	I0127 12:28:08.335579 1771581 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:28:08.336115 1771581 main.go:141] libmachine: Using API Version  1
	I0127 12:28:08.336156 1771581 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:28:08.336606 1771581 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:28:08.336830 1771581 main.go:141] libmachine: (pause-502641) Calling .DriverName
	I0127 12:28:08.337079 1771581 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:28:08.337388 1771581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:28:08.337442 1771581 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:28:08.352374 1771581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44631
	I0127 12:28:08.352789 1771581 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:28:08.353290 1771581 main.go:141] libmachine: Using API Version  1
	I0127 12:28:08.353312 1771581 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:28:08.353660 1771581 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:28:08.353858 1771581 main.go:141] libmachine: (pause-502641) Calling .DriverName
	I0127 12:28:08.388627 1771581 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 12:28:08.389678 1771581 start.go:297] selected driver: kvm2
	I0127 12:28:08.389706 1771581 start.go:901] validating driver "kvm2" against &{Name:pause-502641 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-502641 Namespace:def
ault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.90 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-polic
y:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:28:08.389848 1771581 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:28:08.390156 1771581 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:28:08.390222 1771581 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20318-1724227/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 12:28:08.404979 1771581 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 12:28:08.405672 1771581 cni.go:84] Creating CNI manager for ""
	I0127 12:28:08.405721 1771581 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 12:28:08.405770 1771581 start.go:340] cluster config:
	{Name:pause-502641 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-502641 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.90 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:f
alse storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:28:08.405925 1771581 iso.go:125] acquiring lock: {Name:mkadfcca31fda677c8c62cad1d325dd2bd0a2473 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:28:08.407720 1771581 out.go:177] * Starting "pause-502641" primary control-plane node in "pause-502641" cluster
	I0127 12:28:08.408722 1771581 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 12:28:08.408756 1771581 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 12:28:08.408766 1771581 cache.go:56] Caching tarball of preloaded images
	I0127 12:28:08.408860 1771581 preload.go:172] Found /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 12:28:08.408875 1771581 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 12:28:08.409003 1771581 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/pause-502641/config.json ...
	I0127 12:28:08.409185 1771581 start.go:360] acquireMachinesLock for pause-502641: {Name:mk206ddd5e564ea6986d65bef76be5837a8b5360 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 12:28:08.409235 1771581 start.go:364] duration metric: took 30.348µs to acquireMachinesLock for "pause-502641"
	I0127 12:28:08.409254 1771581 start.go:96] Skipping create...Using existing machine configuration
	I0127 12:28:08.409261 1771581 fix.go:54] fixHost starting: 
	I0127 12:28:08.409507 1771581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:28:08.409542 1771581 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:28:08.423957 1771581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33665
	I0127 12:28:08.424375 1771581 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:28:08.424890 1771581 main.go:141] libmachine: Using API Version  1
	I0127 12:28:08.424921 1771581 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:28:08.425297 1771581 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:28:08.425488 1771581 main.go:141] libmachine: (pause-502641) Calling .DriverName
	I0127 12:28:08.425618 1771581 main.go:141] libmachine: (pause-502641) Calling .GetState
	I0127 12:28:08.427803 1771581 fix.go:112] recreateIfNeeded on pause-502641: state=Running err=<nil>
	W0127 12:28:08.427827 1771581 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 12:28:08.429445 1771581 out.go:177] * Updating the running kvm2 "pause-502641" VM ...
	I0127 12:28:08.430475 1771581 machine.go:93] provisionDockerMachine start ...
	I0127 12:28:08.430497 1771581 main.go:141] libmachine: (pause-502641) Calling .DriverName
	I0127 12:28:08.430698 1771581 main.go:141] libmachine: (pause-502641) Calling .GetSSHHostname
	I0127 12:28:08.433378 1771581 main.go:141] libmachine: (pause-502641) DBG | domain pause-502641 has defined MAC address 52:54:00:88:77:81 in network mk-pause-502641
	I0127 12:28:08.433823 1771581 main.go:141] libmachine: (pause-502641) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:77:81", ip: ""} in network mk-pause-502641: {Iface:virbr1 ExpiryTime:2025-01-27 13:27:01 +0000 UTC Type:0 Mac:52:54:00:88:77:81 Iaid: IPaddr:192.168.83.90 Prefix:24 Hostname:pause-502641 Clientid:01:52:54:00:88:77:81}
	I0127 12:28:08.433840 1771581 main.go:141] libmachine: (pause-502641) DBG | domain pause-502641 has defined IP address 192.168.83.90 and MAC address 52:54:00:88:77:81 in network mk-pause-502641
	I0127 12:28:08.434051 1771581 main.go:141] libmachine: (pause-502641) Calling .GetSSHPort
	I0127 12:28:08.434247 1771581 main.go:141] libmachine: (pause-502641) Calling .GetSSHKeyPath
	I0127 12:28:08.434404 1771581 main.go:141] libmachine: (pause-502641) Calling .GetSSHKeyPath
	I0127 12:28:08.434552 1771581 main.go:141] libmachine: (pause-502641) Calling .GetSSHUsername
	I0127 12:28:08.434702 1771581 main.go:141] libmachine: Using SSH client type: native
	I0127 12:28:08.434975 1771581 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.83.90 22 <nil> <nil>}
	I0127 12:28:08.434990 1771581 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 12:28:08.559793 1771581 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-502641
	
	I0127 12:28:08.559833 1771581 main.go:141] libmachine: (pause-502641) Calling .GetMachineName
	I0127 12:28:08.560097 1771581 buildroot.go:166] provisioning hostname "pause-502641"
	I0127 12:28:08.560140 1771581 main.go:141] libmachine: (pause-502641) Calling .GetMachineName
	I0127 12:28:08.560416 1771581 main.go:141] libmachine: (pause-502641) Calling .GetSSHHostname
	I0127 12:28:08.563680 1771581 main.go:141] libmachine: (pause-502641) DBG | domain pause-502641 has defined MAC address 52:54:00:88:77:81 in network mk-pause-502641
	I0127 12:28:08.564096 1771581 main.go:141] libmachine: (pause-502641) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:77:81", ip: ""} in network mk-pause-502641: {Iface:virbr1 ExpiryTime:2025-01-27 13:27:01 +0000 UTC Type:0 Mac:52:54:00:88:77:81 Iaid: IPaddr:192.168.83.90 Prefix:24 Hostname:pause-502641 Clientid:01:52:54:00:88:77:81}
	I0127 12:28:08.564141 1771581 main.go:141] libmachine: (pause-502641) DBG | domain pause-502641 has defined IP address 192.168.83.90 and MAC address 52:54:00:88:77:81 in network mk-pause-502641
	I0127 12:28:08.564332 1771581 main.go:141] libmachine: (pause-502641) Calling .GetSSHPort
	I0127 12:28:08.564543 1771581 main.go:141] libmachine: (pause-502641) Calling .GetSSHKeyPath
	I0127 12:28:08.564661 1771581 main.go:141] libmachine: (pause-502641) Calling .GetSSHKeyPath
	I0127 12:28:08.564814 1771581 main.go:141] libmachine: (pause-502641) Calling .GetSSHUsername
	I0127 12:28:08.565006 1771581 main.go:141] libmachine: Using SSH client type: native
	I0127 12:28:08.565230 1771581 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.83.90 22 <nil> <nil>}
	I0127 12:28:08.565248 1771581 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-502641 && echo "pause-502641" | sudo tee /etc/hostname
	I0127 12:28:08.704709 1771581 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-502641
	
	I0127 12:28:08.704744 1771581 main.go:141] libmachine: (pause-502641) Calling .GetSSHHostname
	I0127 12:28:08.707826 1771581 main.go:141] libmachine: (pause-502641) DBG | domain pause-502641 has defined MAC address 52:54:00:88:77:81 in network mk-pause-502641
	I0127 12:28:08.708241 1771581 main.go:141] libmachine: (pause-502641) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:77:81", ip: ""} in network mk-pause-502641: {Iface:virbr1 ExpiryTime:2025-01-27 13:27:01 +0000 UTC Type:0 Mac:52:54:00:88:77:81 Iaid: IPaddr:192.168.83.90 Prefix:24 Hostname:pause-502641 Clientid:01:52:54:00:88:77:81}
	I0127 12:28:08.708294 1771581 main.go:141] libmachine: (pause-502641) DBG | domain pause-502641 has defined IP address 192.168.83.90 and MAC address 52:54:00:88:77:81 in network mk-pause-502641
	I0127 12:28:08.708505 1771581 main.go:141] libmachine: (pause-502641) Calling .GetSSHPort
	I0127 12:28:08.708692 1771581 main.go:141] libmachine: (pause-502641) Calling .GetSSHKeyPath
	I0127 12:28:08.708901 1771581 main.go:141] libmachine: (pause-502641) Calling .GetSSHKeyPath
	I0127 12:28:08.709065 1771581 main.go:141] libmachine: (pause-502641) Calling .GetSSHUsername
	I0127 12:28:08.709242 1771581 main.go:141] libmachine: Using SSH client type: native
	I0127 12:28:08.709425 1771581 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.83.90 22 <nil> <nil>}
	I0127 12:28:08.709446 1771581 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-502641' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-502641/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-502641' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 12:28:08.823723 1771581 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 12:28:08.823772 1771581 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20318-1724227/.minikube CaCertPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20318-1724227/.minikube}
	I0127 12:28:08.823808 1771581 buildroot.go:174] setting up certificates
	I0127 12:28:08.823830 1771581 provision.go:84] configureAuth start
	I0127 12:28:08.823849 1771581 main.go:141] libmachine: (pause-502641) Calling .GetMachineName
	I0127 12:28:08.824207 1771581 main.go:141] libmachine: (pause-502641) Calling .GetIP
	I0127 12:28:08.827397 1771581 main.go:141] libmachine: (pause-502641) DBG | domain pause-502641 has defined MAC address 52:54:00:88:77:81 in network mk-pause-502641
	I0127 12:28:08.827697 1771581 main.go:141] libmachine: (pause-502641) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:77:81", ip: ""} in network mk-pause-502641: {Iface:virbr1 ExpiryTime:2025-01-27 13:27:01 +0000 UTC Type:0 Mac:52:54:00:88:77:81 Iaid: IPaddr:192.168.83.90 Prefix:24 Hostname:pause-502641 Clientid:01:52:54:00:88:77:81}
	I0127 12:28:08.827742 1771581 main.go:141] libmachine: (pause-502641) DBG | domain pause-502641 has defined IP address 192.168.83.90 and MAC address 52:54:00:88:77:81 in network mk-pause-502641
	I0127 12:28:08.827976 1771581 main.go:141] libmachine: (pause-502641) Calling .GetSSHHostname
	I0127 12:28:08.830529 1771581 main.go:141] libmachine: (pause-502641) DBG | domain pause-502641 has defined MAC address 52:54:00:88:77:81 in network mk-pause-502641
	I0127 12:28:08.830933 1771581 main.go:141] libmachine: (pause-502641) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:77:81", ip: ""} in network mk-pause-502641: {Iface:virbr1 ExpiryTime:2025-01-27 13:27:01 +0000 UTC Type:0 Mac:52:54:00:88:77:81 Iaid: IPaddr:192.168.83.90 Prefix:24 Hostname:pause-502641 Clientid:01:52:54:00:88:77:81}
	I0127 12:28:08.830961 1771581 main.go:141] libmachine: (pause-502641) DBG | domain pause-502641 has defined IP address 192.168.83.90 and MAC address 52:54:00:88:77:81 in network mk-pause-502641
	I0127 12:28:08.831057 1771581 provision.go:143] copyHostCerts
	I0127 12:28:08.831126 1771581 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.pem, removing ...
	I0127 12:28:08.831154 1771581 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.pem
	I0127 12:28:08.831232 1771581 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.pem (1078 bytes)
	I0127 12:28:08.831373 1771581 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-1724227/.minikube/cert.pem, removing ...
	I0127 12:28:08.831390 1771581 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-1724227/.minikube/cert.pem
	I0127 12:28:08.831424 1771581 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20318-1724227/.minikube/cert.pem (1123 bytes)
	I0127 12:28:08.831544 1771581 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-1724227/.minikube/key.pem, removing ...
	I0127 12:28:08.831561 1771581 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-1724227/.minikube/key.pem
	I0127 12:28:08.831597 1771581 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20318-1724227/.minikube/key.pem (1675 bytes)
	I0127 12:28:08.831741 1771581 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca-key.pem org=jenkins.pause-502641 san=[127.0.0.1 192.168.83.90 localhost minikube pause-502641]
	I0127 12:28:09.040028 1771581 provision.go:177] copyRemoteCerts
	I0127 12:28:09.040089 1771581 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 12:28:09.040116 1771581 main.go:141] libmachine: (pause-502641) Calling .GetSSHHostname
	I0127 12:28:09.042830 1771581 main.go:141] libmachine: (pause-502641) DBG | domain pause-502641 has defined MAC address 52:54:00:88:77:81 in network mk-pause-502641
	I0127 12:28:09.043107 1771581 main.go:141] libmachine: (pause-502641) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:77:81", ip: ""} in network mk-pause-502641: {Iface:virbr1 ExpiryTime:2025-01-27 13:27:01 +0000 UTC Type:0 Mac:52:54:00:88:77:81 Iaid: IPaddr:192.168.83.90 Prefix:24 Hostname:pause-502641 Clientid:01:52:54:00:88:77:81}
	I0127 12:28:09.043136 1771581 main.go:141] libmachine: (pause-502641) DBG | domain pause-502641 has defined IP address 192.168.83.90 and MAC address 52:54:00:88:77:81 in network mk-pause-502641
	I0127 12:28:09.043308 1771581 main.go:141] libmachine: (pause-502641) Calling .GetSSHPort
	I0127 12:28:09.043523 1771581 main.go:141] libmachine: (pause-502641) Calling .GetSSHKeyPath
	I0127 12:28:09.043686 1771581 main.go:141] libmachine: (pause-502641) Calling .GetSSHUsername
	I0127 12:28:09.043820 1771581 sshutil.go:53] new ssh client: &{IP:192.168.83.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/pause-502641/id_rsa Username:docker}
	I0127 12:28:09.128405 1771581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 12:28:09.154272 1771581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0127 12:28:09.182328 1771581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 12:28:09.205950 1771581 provision.go:87] duration metric: took 382.105862ms to configureAuth
	I0127 12:28:09.205976 1771581 buildroot.go:189] setting minikube options for container-runtime
	I0127 12:28:09.206227 1771581 config.go:182] Loaded profile config "pause-502641": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:28:09.206311 1771581 main.go:141] libmachine: (pause-502641) Calling .GetSSHHostname
	I0127 12:28:09.209060 1771581 main.go:141] libmachine: (pause-502641) DBG | domain pause-502641 has defined MAC address 52:54:00:88:77:81 in network mk-pause-502641
	I0127 12:28:09.209383 1771581 main.go:141] libmachine: (pause-502641) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:77:81", ip: ""} in network mk-pause-502641: {Iface:virbr1 ExpiryTime:2025-01-27 13:27:01 +0000 UTC Type:0 Mac:52:54:00:88:77:81 Iaid: IPaddr:192.168.83.90 Prefix:24 Hostname:pause-502641 Clientid:01:52:54:00:88:77:81}
	I0127 12:28:09.209425 1771581 main.go:141] libmachine: (pause-502641) DBG | domain pause-502641 has defined IP address 192.168.83.90 and MAC address 52:54:00:88:77:81 in network mk-pause-502641
	I0127 12:28:09.209622 1771581 main.go:141] libmachine: (pause-502641) Calling .GetSSHPort
	I0127 12:28:09.209798 1771581 main.go:141] libmachine: (pause-502641) Calling .GetSSHKeyPath
	I0127 12:28:09.209964 1771581 main.go:141] libmachine: (pause-502641) Calling .GetSSHKeyPath
	I0127 12:28:09.210133 1771581 main.go:141] libmachine: (pause-502641) Calling .GetSSHUsername
	I0127 12:28:09.210306 1771581 main.go:141] libmachine: Using SSH client type: native
	I0127 12:28:09.210485 1771581 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.83.90 22 <nil> <nil>}
	I0127 12:28:09.210505 1771581 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 12:28:14.675300 1771581 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 12:28:14.675332 1771581 machine.go:96] duration metric: took 6.244841339s to provisionDockerMachine
	I0127 12:28:14.675349 1771581 start.go:293] postStartSetup for "pause-502641" (driver="kvm2")
	I0127 12:28:14.675363 1771581 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 12:28:14.675413 1771581 main.go:141] libmachine: (pause-502641) Calling .DriverName
	I0127 12:28:14.675808 1771581 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 12:28:14.675854 1771581 main.go:141] libmachine: (pause-502641) Calling .GetSSHHostname
	I0127 12:28:14.678602 1771581 main.go:141] libmachine: (pause-502641) DBG | domain pause-502641 has defined MAC address 52:54:00:88:77:81 in network mk-pause-502641
	I0127 12:28:14.678972 1771581 main.go:141] libmachine: (pause-502641) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:77:81", ip: ""} in network mk-pause-502641: {Iface:virbr1 ExpiryTime:2025-01-27 13:27:01 +0000 UTC Type:0 Mac:52:54:00:88:77:81 Iaid: IPaddr:192.168.83.90 Prefix:24 Hostname:pause-502641 Clientid:01:52:54:00:88:77:81}
	I0127 12:28:14.679001 1771581 main.go:141] libmachine: (pause-502641) DBG | domain pause-502641 has defined IP address 192.168.83.90 and MAC address 52:54:00:88:77:81 in network mk-pause-502641
	I0127 12:28:14.679169 1771581 main.go:141] libmachine: (pause-502641) Calling .GetSSHPort
	I0127 12:28:14.679358 1771581 main.go:141] libmachine: (pause-502641) Calling .GetSSHKeyPath
	I0127 12:28:14.679539 1771581 main.go:141] libmachine: (pause-502641) Calling .GetSSHUsername
	I0127 12:28:14.679687 1771581 sshutil.go:53] new ssh client: &{IP:192.168.83.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/pause-502641/id_rsa Username:docker}
	I0127 12:28:14.764469 1771581 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 12:28:14.768369 1771581 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 12:28:14.768391 1771581 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-1724227/.minikube/addons for local assets ...
	I0127 12:28:14.768445 1771581 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-1724227/.minikube/files for local assets ...
	I0127 12:28:14.768550 1771581 filesync.go:149] local asset: /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem -> 17313962.pem in /etc/ssl/certs
	I0127 12:28:14.768667 1771581 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 12:28:14.777494 1771581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem --> /etc/ssl/certs/17313962.pem (1708 bytes)
	I0127 12:28:14.808234 1771581 start.go:296] duration metric: took 132.869063ms for postStartSetup
	I0127 12:28:14.808272 1771581 fix.go:56] duration metric: took 6.399010532s for fixHost
	I0127 12:28:14.808293 1771581 main.go:141] libmachine: (pause-502641) Calling .GetSSHHostname
	I0127 12:28:14.811419 1771581 main.go:141] libmachine: (pause-502641) DBG | domain pause-502641 has defined MAC address 52:54:00:88:77:81 in network mk-pause-502641
	I0127 12:28:14.811837 1771581 main.go:141] libmachine: (pause-502641) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:77:81", ip: ""} in network mk-pause-502641: {Iface:virbr1 ExpiryTime:2025-01-27 13:27:01 +0000 UTC Type:0 Mac:52:54:00:88:77:81 Iaid: IPaddr:192.168.83.90 Prefix:24 Hostname:pause-502641 Clientid:01:52:54:00:88:77:81}
	I0127 12:28:14.811866 1771581 main.go:141] libmachine: (pause-502641) DBG | domain pause-502641 has defined IP address 192.168.83.90 and MAC address 52:54:00:88:77:81 in network mk-pause-502641
	I0127 12:28:14.812060 1771581 main.go:141] libmachine: (pause-502641) Calling .GetSSHPort
	I0127 12:28:14.812299 1771581 main.go:141] libmachine: (pause-502641) Calling .GetSSHKeyPath
	I0127 12:28:14.812453 1771581 main.go:141] libmachine: (pause-502641) Calling .GetSSHKeyPath
	I0127 12:28:14.812596 1771581 main.go:141] libmachine: (pause-502641) Calling .GetSSHUsername
	I0127 12:28:14.812770 1771581 main.go:141] libmachine: Using SSH client type: native
	I0127 12:28:14.812979 1771581 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.83.90 22 <nil> <nil>}
	I0127 12:28:14.812992 1771581 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 12:28:14.932078 1771581 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737980894.924737591
	
	I0127 12:28:14.932108 1771581 fix.go:216] guest clock: 1737980894.924737591
	I0127 12:28:14.932119 1771581 fix.go:229] Guest: 2025-01-27 12:28:14.924737591 +0000 UTC Remote: 2025-01-27 12:28:14.808276062 +0000 UTC m=+6.544453336 (delta=116.461529ms)
	I0127 12:28:14.932148 1771581 fix.go:200] guest clock delta is within tolerance: 116.461529ms
	I0127 12:28:14.932163 1771581 start.go:83] releasing machines lock for "pause-502641", held for 6.52291628s
	I0127 12:28:14.932191 1771581 main.go:141] libmachine: (pause-502641) Calling .DriverName
	I0127 12:28:14.932470 1771581 main.go:141] libmachine: (pause-502641) Calling .GetIP
	I0127 12:28:14.935629 1771581 main.go:141] libmachine: (pause-502641) DBG | domain pause-502641 has defined MAC address 52:54:00:88:77:81 in network mk-pause-502641
	I0127 12:28:14.936092 1771581 main.go:141] libmachine: (pause-502641) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:77:81", ip: ""} in network mk-pause-502641: {Iface:virbr1 ExpiryTime:2025-01-27 13:27:01 +0000 UTC Type:0 Mac:52:54:00:88:77:81 Iaid: IPaddr:192.168.83.90 Prefix:24 Hostname:pause-502641 Clientid:01:52:54:00:88:77:81}
	I0127 12:28:14.936119 1771581 main.go:141] libmachine: (pause-502641) DBG | domain pause-502641 has defined IP address 192.168.83.90 and MAC address 52:54:00:88:77:81 in network mk-pause-502641
	I0127 12:28:14.936306 1771581 main.go:141] libmachine: (pause-502641) Calling .DriverName
	I0127 12:28:14.936940 1771581 main.go:141] libmachine: (pause-502641) Calling .DriverName
	I0127 12:28:14.937156 1771581 main.go:141] libmachine: (pause-502641) Calling .DriverName
	I0127 12:28:14.937270 1771581 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 12:28:14.937316 1771581 main.go:141] libmachine: (pause-502641) Calling .GetSSHHostname
	I0127 12:28:14.937396 1771581 ssh_runner.go:195] Run: cat /version.json
	I0127 12:28:14.937428 1771581 main.go:141] libmachine: (pause-502641) Calling .GetSSHHostname
	I0127 12:28:14.940246 1771581 main.go:141] libmachine: (pause-502641) DBG | domain pause-502641 has defined MAC address 52:54:00:88:77:81 in network mk-pause-502641
	I0127 12:28:14.940433 1771581 main.go:141] libmachine: (pause-502641) DBG | domain pause-502641 has defined MAC address 52:54:00:88:77:81 in network mk-pause-502641
	I0127 12:28:14.940740 1771581 main.go:141] libmachine: (pause-502641) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:77:81", ip: ""} in network mk-pause-502641: {Iface:virbr1 ExpiryTime:2025-01-27 13:27:01 +0000 UTC Type:0 Mac:52:54:00:88:77:81 Iaid: IPaddr:192.168.83.90 Prefix:24 Hostname:pause-502641 Clientid:01:52:54:00:88:77:81}
	I0127 12:28:14.940766 1771581 main.go:141] libmachine: (pause-502641) DBG | domain pause-502641 has defined IP address 192.168.83.90 and MAC address 52:54:00:88:77:81 in network mk-pause-502641
	I0127 12:28:14.940790 1771581 main.go:141] libmachine: (pause-502641) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:77:81", ip: ""} in network mk-pause-502641: {Iface:virbr1 ExpiryTime:2025-01-27 13:27:01 +0000 UTC Type:0 Mac:52:54:00:88:77:81 Iaid: IPaddr:192.168.83.90 Prefix:24 Hostname:pause-502641 Clientid:01:52:54:00:88:77:81}
	I0127 12:28:14.940805 1771581 main.go:141] libmachine: (pause-502641) DBG | domain pause-502641 has defined IP address 192.168.83.90 and MAC address 52:54:00:88:77:81 in network mk-pause-502641
	I0127 12:28:14.940910 1771581 main.go:141] libmachine: (pause-502641) Calling .GetSSHPort
	I0127 12:28:14.940994 1771581 main.go:141] libmachine: (pause-502641) Calling .GetSSHPort
	I0127 12:28:14.941069 1771581 main.go:141] libmachine: (pause-502641) Calling .GetSSHKeyPath
	I0127 12:28:14.941138 1771581 main.go:141] libmachine: (pause-502641) Calling .GetSSHKeyPath
	I0127 12:28:14.941199 1771581 main.go:141] libmachine: (pause-502641) Calling .GetSSHUsername
	I0127 12:28:14.941242 1771581 main.go:141] libmachine: (pause-502641) Calling .GetSSHUsername
	I0127 12:28:14.941329 1771581 sshutil.go:53] new ssh client: &{IP:192.168.83.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/pause-502641/id_rsa Username:docker}
	I0127 12:28:14.941399 1771581 sshutil.go:53] new ssh client: &{IP:192.168.83.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/pause-502641/id_rsa Username:docker}
	I0127 12:28:15.023679 1771581 ssh_runner.go:195] Run: systemctl --version
	I0127 12:28:15.056090 1771581 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 12:28:15.217462 1771581 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 12:28:15.224349 1771581 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 12:28:15.224421 1771581 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 12:28:15.233611 1771581 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0127 12:28:15.233632 1771581 start.go:495] detecting cgroup driver to use...
	I0127 12:28:15.233698 1771581 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 12:28:15.253394 1771581 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 12:28:15.267589 1771581 docker.go:217] disabling cri-docker service (if available) ...
	I0127 12:28:15.267658 1771581 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 12:28:15.280434 1771581 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 12:28:15.293491 1771581 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 12:28:15.432621 1771581 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 12:28:15.567760 1771581 docker.go:233] disabling docker service ...
	I0127 12:28:15.567836 1771581 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 12:28:15.583675 1771581 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 12:28:15.596823 1771581 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 12:28:15.731405 1771581 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 12:28:15.868950 1771581 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 12:28:15.883040 1771581 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 12:28:15.900940 1771581 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 12:28:15.901002 1771581 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:28:15.910519 1771581 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 12:28:15.910582 1771581 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:28:15.923853 1771581 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:28:15.936867 1771581 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:28:15.947110 1771581 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 12:28:15.956894 1771581 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:28:15.966355 1771581 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:28:15.976535 1771581 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:28:15.986753 1771581 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 12:28:15.995279 1771581 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 12:28:16.003926 1771581 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:28:16.134097 1771581 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 12:28:16.961534 1771581 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 12:28:16.961608 1771581 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 12:28:16.969615 1771581 start.go:563] Will wait 60s for crictl version
	I0127 12:28:16.969691 1771581 ssh_runner.go:195] Run: which crictl
	I0127 12:28:16.973307 1771581 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 12:28:17.008960 1771581 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 12:28:17.009052 1771581 ssh_runner.go:195] Run: crio --version
	I0127 12:28:17.039418 1771581 ssh_runner.go:195] Run: crio --version
	I0127 12:28:17.072323 1771581 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 12:28:17.073458 1771581 main.go:141] libmachine: (pause-502641) Calling .GetIP
	I0127 12:28:17.075857 1771581 main.go:141] libmachine: (pause-502641) DBG | domain pause-502641 has defined MAC address 52:54:00:88:77:81 in network mk-pause-502641
	I0127 12:28:17.076220 1771581 main.go:141] libmachine: (pause-502641) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:77:81", ip: ""} in network mk-pause-502641: {Iface:virbr1 ExpiryTime:2025-01-27 13:27:01 +0000 UTC Type:0 Mac:52:54:00:88:77:81 Iaid: IPaddr:192.168.83.90 Prefix:24 Hostname:pause-502641 Clientid:01:52:54:00:88:77:81}
	I0127 12:28:17.076251 1771581 main.go:141] libmachine: (pause-502641) DBG | domain pause-502641 has defined IP address 192.168.83.90 and MAC address 52:54:00:88:77:81 in network mk-pause-502641
	I0127 12:28:17.076414 1771581 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0127 12:28:17.080498 1771581 kubeadm.go:883] updating cluster {Name:pause-502641 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-502641 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.90 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portain
er:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 12:28:17.080630 1771581 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 12:28:17.080692 1771581 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 12:28:17.125394 1771581 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 12:28:17.125418 1771581 crio.go:433] Images already preloaded, skipping extraction
	I0127 12:28:17.125466 1771581 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 12:28:17.161434 1771581 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 12:28:17.161460 1771581 cache_images.go:84] Images are preloaded, skipping loading
	I0127 12:28:17.161468 1771581 kubeadm.go:934] updating node { 192.168.83.90 8443 v1.32.1 crio true true} ...
	I0127 12:28:17.161568 1771581 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-502641 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:pause-502641 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 12:28:17.161637 1771581 ssh_runner.go:195] Run: crio config
	I0127 12:28:17.208182 1771581 cni.go:84] Creating CNI manager for ""
	I0127 12:28:17.208214 1771581 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 12:28:17.208226 1771581 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 12:28:17.208257 1771581 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.90 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-502641 NodeName:pause-502641 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 12:28:17.208415 1771581 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.90
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-502641"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.83.90"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.90"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 12:28:17.208496 1771581 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 12:28:17.218325 1771581 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 12:28:17.218387 1771581 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 12:28:17.227693 1771581 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0127 12:28:17.244660 1771581 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 12:28:17.261510 1771581 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I0127 12:28:17.277446 1771581 ssh_runner.go:195] Run: grep 192.168.83.90	control-plane.minikube.internal$ /etc/hosts
	I0127 12:28:17.281013 1771581 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:28:17.411435 1771581 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:28:17.425273 1771581 certs.go:68] Setting up /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/pause-502641 for IP: 192.168.83.90
	I0127 12:28:17.425301 1771581 certs.go:194] generating shared ca certs ...
	I0127 12:28:17.425325 1771581 certs.go:226] acquiring lock for ca certs: {Name:mk9e5c59e9dbe250ccc1601895509e2d4f3690e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:28:17.425506 1771581 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.key
	I0127 12:28:17.425566 1771581 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.key
	I0127 12:28:17.425578 1771581 certs.go:256] generating profile certs ...
	I0127 12:28:17.425673 1771581 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/pause-502641/client.key
	I0127 12:28:17.425766 1771581 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/pause-502641/apiserver.key.014cfbd8
	I0127 12:28:17.425807 1771581 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/pause-502641/proxy-client.key
	I0127 12:28:17.425924 1771581 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/1731396.pem (1338 bytes)
	W0127 12:28:17.425953 1771581 certs.go:480] ignoring /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/1731396_empty.pem, impossibly tiny 0 bytes
	I0127 12:28:17.425962 1771581 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 12:28:17.425983 1771581 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem (1078 bytes)
	I0127 12:28:17.426007 1771581 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem (1123 bytes)
	I0127 12:28:17.426027 1771581 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/key.pem (1675 bytes)
	I0127 12:28:17.426069 1771581 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem (1708 bytes)
	I0127 12:28:17.426839 1771581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 12:28:17.451585 1771581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 12:28:17.474553 1771581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 12:28:17.497429 1771581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 12:28:17.519486 1771581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/pause-502641/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0127 12:28:17.542433 1771581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/pause-502641/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 12:28:17.566281 1771581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/pause-502641/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 12:28:17.588350 1771581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/pause-502641/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 12:28:17.614214 1771581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 12:28:17.638463 1771581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/1731396.pem --> /usr/share/ca-certificates/1731396.pem (1338 bytes)
	I0127 12:28:17.659236 1771581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem --> /usr/share/ca-certificates/17313962.pem (1708 bytes)
	I0127 12:28:17.682046 1771581 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 12:28:17.697695 1771581 ssh_runner.go:195] Run: openssl version
	I0127 12:28:17.703625 1771581 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1731396.pem && ln -fs /usr/share/ca-certificates/1731396.pem /etc/ssl/certs/1731396.pem"
	I0127 12:28:17.714070 1771581 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1731396.pem
	I0127 12:28:17.718237 1771581 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 11:32 /usr/share/ca-certificates/1731396.pem
	I0127 12:28:17.718288 1771581 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1731396.pem
	I0127 12:28:17.723253 1771581 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1731396.pem /etc/ssl/certs/51391683.0"
	I0127 12:28:17.732311 1771581 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17313962.pem && ln -fs /usr/share/ca-certificates/17313962.pem /etc/ssl/certs/17313962.pem"
	I0127 12:28:17.742302 1771581 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17313962.pem
	I0127 12:28:17.746144 1771581 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 11:32 /usr/share/ca-certificates/17313962.pem
	I0127 12:28:17.746181 1771581 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17313962.pem
	I0127 12:28:17.751705 1771581 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17313962.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 12:28:17.760355 1771581 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 12:28:17.771252 1771581 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:28:17.775328 1771581 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 11:24 /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:28:17.775365 1771581 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:28:17.780688 1771581 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 12:28:17.789867 1771581 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 12:28:17.794101 1771581 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 12:28:17.803627 1771581 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 12:28:17.836947 1771581 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 12:28:17.858507 1771581 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 12:28:17.876383 1771581 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 12:28:17.900200 1771581 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0127 12:28:17.950170 1771581 kubeadm.go:392] StartCluster: {Name:pause-502641 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-502641 Namespace:default APIServerHAVI
P: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.90 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:
false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:28:17.950293 1771581 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 12:28:17.950377 1771581 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 12:28:18.212202 1771581 cri.go:89] found id: "e2852b5fafd3d9d698190468349c189ff20dea9b0adfdcaca272c7e307004195"
	I0127 12:28:18.212234 1771581 cri.go:89] found id: "e9b1df0076dd8fe0f792802f4334195624c380a5c42f315ba48eac9805e7d57f"
	I0127 12:28:18.212240 1771581 cri.go:89] found id: "ad5d42a550c9a79775cb9394104ff467137abb7c90e74a9367e650f168859bf5"
	I0127 12:28:18.212245 1771581 cri.go:89] found id: "97c1d9e4d7d2b165cb5ac0b67b9ecca95c45e3633725ea6ed3dab7df3a7d4e86"
	I0127 12:28:18.212249 1771581 cri.go:89] found id: "efea1a674441694a2558ce316bdc3b86b1ae51d8849363cb91972fb939476f51"
	I0127 12:28:18.212254 1771581 cri.go:89] found id: "b584e649ee8469ebc66f1a79e1b9bfb2e14ac33ee53edede1c76934789eb7ed4"
	I0127 12:28:18.212258 1771581 cri.go:89] found id: ""
	I0127 12:28:18.212316 1771581 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
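
The truncated log above ends with minikube enumerating kube-system containers: it runs `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` over SSH and collects the container IDs it prints. A sketch of that listing step on its own, assuming crictl is installed and the CRI runtime socket is configured (on a real node this normally needs root, which the log handles with `sudo -s eval`):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers mirrors the crictl invocation shown in the log:
// it returns the IDs of all containers (running or not) whose pod lives in
// the kube-system namespace.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}
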
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-502641 -n pause-502641
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-502641 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-502641 logs -n 25: (1.213694801s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-956477 sudo                  | cilium-956477             | jenkins | v1.35.0 | 27 Jan 25 12:25 UTC |                     |
	|         | systemctl status containerd            |                           |         |         |                     |                     |
	|         | --all --full --no-pager                |                           |         |         |                     |                     |
	| ssh     | -p cilium-956477 sudo                  | cilium-956477             | jenkins | v1.35.0 | 27 Jan 25 12:25 UTC |                     |
	|         | systemctl cat containerd               |                           |         |         |                     |                     |
	|         | --no-pager                             |                           |         |         |                     |                     |
	| ssh     | -p cilium-956477 sudo cat              | cilium-956477             | jenkins | v1.35.0 | 27 Jan 25 12:25 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-956477 sudo cat              | cilium-956477             | jenkins | v1.35.0 | 27 Jan 25 12:25 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-956477 sudo                  | cilium-956477             | jenkins | v1.35.0 | 27 Jan 25 12:25 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-956477 sudo                  | cilium-956477             | jenkins | v1.35.0 | 27 Jan 25 12:25 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-956477 sudo                  | cilium-956477             | jenkins | v1.35.0 | 27 Jan 25 12:25 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-956477 sudo find             | cilium-956477             | jenkins | v1.35.0 | 27 Jan 25 12:25 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-956477 sudo crio             | cilium-956477             | jenkins | v1.35.0 | 27 Jan 25 12:25 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-956477                       | cilium-956477             | jenkins | v1.35.0 | 27 Jan 25 12:25 UTC | 27 Jan 25 12:25 UTC |
	| start   | -p cert-options-324519                 | cert-options-324519       | jenkins | v1.35.0 | 27 Jan 25 12:25 UTC | 27 Jan 25 12:27 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-980891 ssh cat      | force-systemd-flag-980891 | jenkins | v1.35.0 | 27 Jan 25 12:26 UTC | 27 Jan 25 12:26 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-980891           | force-systemd-flag-980891 | jenkins | v1.35.0 | 27 Jan 25 12:26 UTC | 27 Jan 25 12:26 UTC |
	| start   | -p pause-502641 --memory=2048          | pause-502641              | jenkins | v1.35.0 | 27 Jan 25 12:26 UTC | 27 Jan 25 12:28 UTC |
	|         | --install-addons=false                 |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2               |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | cert-options-324519 ssh                | cert-options-324519       | jenkins | v1.35.0 | 27 Jan 25 12:27 UTC | 27 Jan 25 12:27 UTC |
	|         | openssl x509 -text -noout -in          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-324519 -- sudo         | cert-options-324519       | jenkins | v1.35.0 | 27 Jan 25 12:27 UTC | 27 Jan 25 12:27 UTC |
	|         | cat /etc/kubernetes/admin.conf         |                           |         |         |                     |                     |
	| delete  | -p cert-options-324519                 | cert-options-324519       | jenkins | v1.35.0 | 27 Jan 25 12:27 UTC | 27 Jan 25 12:27 UTC |
	| start   | -p old-k8s-version-488586              | old-k8s-version-488586    | jenkins | v1.35.0 | 27 Jan 25 12:27 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true          |                           |         |         |                     |                     |
	|         | --kvm-network=default                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                |                           |         |         |                     |                     |
	|         | --keep-context=false                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-029294           | kubernetes-upgrade-029294 | jenkins | v1.35.0 | 27 Jan 25 12:27 UTC | 27 Jan 25 12:27 UTC |
	| start   | -p kubernetes-upgrade-029294           | kubernetes-upgrade-029294 | jenkins | v1.35.0 | 27 Jan 25 12:27 UTC | 27 Jan 25 12:28 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1           |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p pause-502641                        | pause-502641              | jenkins | v1.35.0 | 27 Jan 25 12:28 UTC | 27 Jan 25 12:28 UTC |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-029294           | kubernetes-upgrade-029294 | jenkins | v1.35.0 | 27 Jan 25 12:28 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-029294           | kubernetes-upgrade-029294 | jenkins | v1.35.0 | 27 Jan 25 12:28 UTC | 27 Jan 25 12:28 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1           |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-029294           | kubernetes-upgrade-029294 | jenkins | v1.35.0 | 27 Jan 25 12:28 UTC | 27 Jan 25 12:28 UTC |
	| start   | -p no-preload-472479                   | no-preload-472479         | jenkins | v1.35.0 | 27 Jan 25 12:28 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true          |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2          |                           |         |         |                     |                     |
	|         |  --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1           |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 12:28:39
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 12:28:39.996015 1772097 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:28:39.996240 1772097 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:28:39.996249 1772097 out.go:358] Setting ErrFile to fd 2...
	I0127 12:28:39.996253 1772097 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:28:39.996401 1772097 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-1724227/.minikube/bin
	I0127 12:28:39.996967 1772097 out.go:352] Setting JSON to false
	I0127 12:28:39.997934 1772097 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":33061,"bootTime":1737947859,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 12:28:39.998031 1772097 start.go:139] virtualization: kvm guest
	I0127 12:28:39.999875 1772097 out.go:177] * [no-preload-472479] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 12:28:40.001182 1772097 notify.go:220] Checking for updates...
	I0127 12:28:40.001194 1772097 out.go:177]   - MINIKUBE_LOCATION=20318
	I0127 12:28:40.002265 1772097 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:28:40.003441 1772097 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20318-1724227/kubeconfig
	I0127 12:28:40.004588 1772097 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-1724227/.minikube
	I0127 12:28:40.005723 1772097 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 12:28:40.006883 1772097 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:28:40.008443 1772097 config.go:182] Loaded profile config "cert-expiration-103712": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:28:40.008559 1772097 config.go:182] Loaded profile config "old-k8s-version-488586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 12:28:40.008677 1772097 config.go:182] Loaded profile config "pause-502641": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:28:40.008769 1772097 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:28:40.043279 1772097 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 12:28:40.044338 1772097 start.go:297] selected driver: kvm2
	I0127 12:28:40.044355 1772097 start.go:901] validating driver "kvm2" against <nil>
	I0127 12:28:40.044365 1772097 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:28:40.045031 1772097 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:28:40.045116 1772097 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20318-1724227/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 12:28:40.059515 1772097 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 12:28:40.059556 1772097 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 12:28:40.059789 1772097 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 12:28:40.059817 1772097 cni.go:84] Creating CNI manager for ""
	I0127 12:28:40.059858 1772097 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 12:28:40.059867 1772097 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 12:28:40.059912 1772097 start.go:340] cluster config:
	{Name:no-preload-472479 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-472479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: N
etworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: A
utoPauseInterval:1m0s}
	I0127 12:28:40.060008 1772097 iso.go:125] acquiring lock: {Name:mkadfcca31fda677c8c62cad1d325dd2bd0a2473 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:28:40.061421 1772097 out.go:177] * Starting "no-preload-472479" primary control-plane node in "no-preload-472479" cluster
	I0127 12:28:40.062498 1772097 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 12:28:40.062642 1772097 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/no-preload-472479/config.json ...
	I0127 12:28:40.062674 1772097 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/no-preload-472479/config.json: {Name:mkedfbfbfe1ebe6cb6a7a447dd39f5cbd4480c04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:28:40.062769 1772097 cache.go:107] acquiring lock: {Name:mkb25515b3b95c5192227a9f8b73580df8690d67 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:28:40.062786 1772097 cache.go:107] acquiring lock: {Name:mk639f71a3608ebd880c09c6f4eb9a539098cf11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:28:40.062791 1772097 cache.go:107] acquiring lock: {Name:mkda62c534daf9b50eef3a3b72d1af9f7ff250f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:28:40.062820 1772097 cache.go:107] acquiring lock: {Name:mked314d62a39ef0534a0d0db17e6c54c2b2c2af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:28:40.062902 1772097 cache.go:115] /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0127 12:28:40.062951 1772097 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 171.744µs
	I0127 12:28:40.062996 1772097 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0127 12:28:40.062994 1772097 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.32.1
	I0127 12:28:40.063014 1772097 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 12:28:40.063013 1772097 cache.go:107] acquiring lock: {Name:mk0ad24c2418ae07d65df52baee7ca3e4777ce5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:28:40.063013 1772097 cache.go:107] acquiring lock: {Name:mk9b4a8e0176725482a193dc85ee9e3de8f76e70 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:28:40.062897 1772097 cache.go:107] acquiring lock: {Name:mk7d3e8c31e3028ac530b433216d6548161f2b1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:28:40.062894 1772097 start.go:360] acquireMachinesLock for no-preload-472479: {Name:mk206ddd5e564ea6986d65bef76be5837a8b5360 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 12:28:40.063204 1772097 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.32.1
	I0127 12:28:40.063224 1772097 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0127 12:28:40.062969 1772097 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0127 12:28:40.063280 1772097 start.go:364] duration metric: took 68.159µs to acquireMachinesLock for "no-preload-472479"
	I0127 12:28:40.063300 1772097 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.16-0
	I0127 12:28:40.062940 1772097 cache.go:107] acquiring lock: {Name:mk7e91ce66d7bc99a7dd43c311bf67c378549dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:28:40.063318 1772097 start.go:93] Provisioning new machine with config: &{Name:no-preload-472479 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-472479
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 12:28:40.063445 1772097 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 12:28:40.063451 1772097 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.32.1
	I0127 12:28:38.608071 1771581 pod_ready.go:103] pod "etcd-pause-502641" in "kube-system" namespace has status "Ready":"False"
	I0127 12:28:41.107284 1771581 pod_ready.go:103] pod "etcd-pause-502641" in "kube-system" namespace has status "Ready":"False"
	I0127 12:28:40.064193 1772097 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 12:28:40.064261 1772097 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0127 12:28:40.064263 1772097 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.32.1
	I0127 12:28:40.064263 1772097 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.16-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.16-0
	I0127 12:28:40.064340 1772097 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0127 12:28:40.064196 1772097 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.32.1
	I0127 12:28:40.064594 1772097 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.32.1
	I0127 12:28:40.065117 1772097 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0127 12:28:40.065317 1772097 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:28:40.065362 1772097 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:28:40.080504 1772097 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43795
	I0127 12:28:40.080932 1772097 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:28:40.081448 1772097 main.go:141] libmachine: Using API Version  1
	I0127 12:28:40.081469 1772097 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:28:40.081779 1772097 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:28:40.081962 1772097 main.go:141] libmachine: (no-preload-472479) Calling .GetMachineName
	I0127 12:28:40.082120 1772097 main.go:141] libmachine: (no-preload-472479) Calling .DriverName
	I0127 12:28:40.082305 1772097 start.go:159] libmachine.API.Create for "no-preload-472479" (driver="kvm2")
	I0127 12:28:40.082341 1772097 client.go:168] LocalClient.Create starting
	I0127 12:28:40.082382 1772097 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem
	I0127 12:28:40.082419 1772097 main.go:141] libmachine: Decoding PEM data...
	I0127 12:28:40.082442 1772097 main.go:141] libmachine: Parsing certificate...
	I0127 12:28:40.082509 1772097 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem
	I0127 12:28:40.082534 1772097 main.go:141] libmachine: Decoding PEM data...
	I0127 12:28:40.082554 1772097 main.go:141] libmachine: Parsing certificate...
	I0127 12:28:40.082578 1772097 main.go:141] libmachine: Running pre-create checks...
	I0127 12:28:40.082596 1772097 main.go:141] libmachine: (no-preload-472479) Calling .PreCreateCheck
	I0127 12:28:40.082948 1772097 main.go:141] libmachine: (no-preload-472479) Calling .GetConfigRaw
	I0127 12:28:40.083325 1772097 main.go:141] libmachine: Creating machine...
	I0127 12:28:40.083338 1772097 main.go:141] libmachine: (no-preload-472479) Calling .Create
	I0127 12:28:40.083474 1772097 main.go:141] libmachine: (no-preload-472479) creating KVM machine...
	I0127 12:28:40.083485 1772097 main.go:141] libmachine: (no-preload-472479) creating network...
	I0127 12:28:40.084732 1772097 main.go:141] libmachine: (no-preload-472479) DBG | found existing default KVM network
	I0127 12:28:40.086558 1772097 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:28:40.086416 1772120 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:13:89:36} reservation:<nil>}
	I0127 12:28:40.088029 1772097 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:28:40.087940 1772120 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003561d0}
	I0127 12:28:40.088051 1772097 main.go:141] libmachine: (no-preload-472479) DBG | created network xml: 
	I0127 12:28:40.088063 1772097 main.go:141] libmachine: (no-preload-472479) DBG | <network>
	I0127 12:28:40.088071 1772097 main.go:141] libmachine: (no-preload-472479) DBG |   <name>mk-no-preload-472479</name>
	I0127 12:28:40.088081 1772097 main.go:141] libmachine: (no-preload-472479) DBG |   <dns enable='no'/>
	I0127 12:28:40.088092 1772097 main.go:141] libmachine: (no-preload-472479) DBG |   
	I0127 12:28:40.088102 1772097 main.go:141] libmachine: (no-preload-472479) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0127 12:28:40.088111 1772097 main.go:141] libmachine: (no-preload-472479) DBG |     <dhcp>
	I0127 12:28:40.088126 1772097 main.go:141] libmachine: (no-preload-472479) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0127 12:28:40.088135 1772097 main.go:141] libmachine: (no-preload-472479) DBG |     </dhcp>
	I0127 12:28:40.088157 1772097 main.go:141] libmachine: (no-preload-472479) DBG |   </ip>
	I0127 12:28:40.088176 1772097 main.go:141] libmachine: (no-preload-472479) DBG |   
	I0127 12:28:40.088182 1772097 main.go:141] libmachine: (no-preload-472479) DBG | </network>
	I0127 12:28:40.088190 1772097 main.go:141] libmachine: (no-preload-472479) DBG | 
	I0127 12:28:40.092942 1772097 main.go:141] libmachine: (no-preload-472479) DBG | trying to create private KVM network mk-no-preload-472479 192.168.50.0/24...
	I0127 12:28:40.167283 1772097 main.go:141] libmachine: (no-preload-472479) DBG | private KVM network mk-no-preload-472479 192.168.50.0/24 created
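
The network.go lines a little further up show the driver skipping 192.168.39.0/24 (already taken by another profile) and settling on the free 192.168.50.0/24 before writing the network XML it just created. A simplified sketch of that kind of free-subnet scan, checking candidates only against local interface addresses (minikube's real logic also consults existing libvirt networks and reservations):

package main

import (
	"fmt"
	"net"
)

// firstFreePrivateSubnet walks a few candidate /24s, in the same spirit as
// the 192.168.39.0/24 -> 192.168.50.0/24 choice in the log, and returns the
// first one that no local interface address falls into.
func firstFreePrivateSubnet(candidates []string) (*net.IPNet, error) {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return nil, err
	}
	for _, c := range candidates {
		_, subnet, err := net.ParseCIDR(c)
		if err != nil {
			return nil, err
		}
		taken := false
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && subnet.Contains(ipnet.IP) {
				taken = true
				break
			}
		}
		if !taken {
			return subnet, nil
		}
	}
	return nil, fmt.Errorf("no free subnet among %v", candidates)
}

func main() {
	free, err := firstFreePrivateSubnet([]string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"})
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("using free private subnet", free)
}
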
	I0127 12:28:40.167331 1772097 main.go:141] libmachine: (no-preload-472479) setting up store path in /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/no-preload-472479 ...
	I0127 12:28:40.167351 1772097 main.go:141] libmachine: (no-preload-472479) building disk image from file:///home/jenkins/minikube-integration/20318-1724227/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 12:28:40.167453 1772097 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:28:40.167366 1772120 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20318-1724227/.minikube
	I0127 12:28:40.167606 1772097 main.go:141] libmachine: (no-preload-472479) Downloading /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20318-1724227/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 12:28:40.274148 1772097 cache.go:162] opening:  /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1
	I0127 12:28:40.281697 1772097 cache.go:162] opening:  /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I0127 12:28:40.294774 1772097 cache.go:162] opening:  /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1
	I0127 12:28:40.295327 1772097 cache.go:162] opening:  /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1
	I0127 12:28:40.307335 1772097 cache.go:162] opening:  /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1
	I0127 12:28:40.311571 1772097 cache.go:162] opening:  /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0127 12:28:40.329481 1772097 cache.go:162] opening:  /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0
	I0127 12:28:40.418130 1772097 cache.go:157] /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0127 12:28:40.418158 1772097 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 355.372631ms
	I0127 12:28:40.418177 1772097 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0127 12:28:40.454647 1772097 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:28:40.454533 1772120 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/no-preload-472479/id_rsa...
	I0127 12:28:40.652731 1772097 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:28:40.652597 1772120 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/no-preload-472479/no-preload-472479.rawdisk...
	I0127 12:28:40.652755 1772097 main.go:141] libmachine: (no-preload-472479) DBG | Writing magic tar header
	I0127 12:28:40.652795 1772097 main.go:141] libmachine: (no-preload-472479) DBG | Writing SSH key tar header
	I0127 12:28:40.652803 1772097 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:28:40.652709 1772120 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/no-preload-472479 ...
	I0127 12:28:40.652814 1772097 main.go:141] libmachine: (no-preload-472479) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/no-preload-472479
	I0127 12:28:40.652929 1772097 main.go:141] libmachine: (no-preload-472479) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines
	I0127 12:28:40.652960 1772097 main.go:141] libmachine: (no-preload-472479) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20318-1724227/.minikube
	I0127 12:28:40.652974 1772097 main.go:141] libmachine: (no-preload-472479) setting executable bit set on /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/no-preload-472479 (perms=drwx------)
	I0127 12:28:40.652988 1772097 main.go:141] libmachine: (no-preload-472479) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20318-1724227
	I0127 12:28:40.653004 1772097 main.go:141] libmachine: (no-preload-472479) setting executable bit set on /home/jenkins/minikube-integration/20318-1724227/.minikube/machines (perms=drwxr-xr-x)
	I0127 12:28:40.653015 1772097 main.go:141] libmachine: (no-preload-472479) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 12:28:40.653028 1772097 main.go:141] libmachine: (no-preload-472479) DBG | checking permissions on dir: /home/jenkins
	I0127 12:28:40.653040 1772097 main.go:141] libmachine: (no-preload-472479) DBG | checking permissions on dir: /home
	I0127 12:28:40.653054 1772097 main.go:141] libmachine: (no-preload-472479) setting executable bit set on /home/jenkins/minikube-integration/20318-1724227/.minikube (perms=drwxr-xr-x)
	I0127 12:28:40.653063 1772097 main.go:141] libmachine: (no-preload-472479) DBG | skipping /home - not owner
	I0127 12:28:40.653082 1772097 main.go:141] libmachine: (no-preload-472479) setting executable bit set on /home/jenkins/minikube-integration/20318-1724227 (perms=drwxrwxr-x)
	I0127 12:28:40.653095 1772097 main.go:141] libmachine: (no-preload-472479) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 12:28:40.653108 1772097 main.go:141] libmachine: (no-preload-472479) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 12:28:40.653117 1772097 main.go:141] libmachine: (no-preload-472479) creating domain...
	I0127 12:28:40.654183 1772097 main.go:141] libmachine: (no-preload-472479) define libvirt domain using xml: 
	I0127 12:28:40.654205 1772097 main.go:141] libmachine: (no-preload-472479) <domain type='kvm'>
	I0127 12:28:40.654215 1772097 main.go:141] libmachine: (no-preload-472479)   <name>no-preload-472479</name>
	I0127 12:28:40.654222 1772097 main.go:141] libmachine: (no-preload-472479)   <memory unit='MiB'>2200</memory>
	I0127 12:28:40.654231 1772097 main.go:141] libmachine: (no-preload-472479)   <vcpu>2</vcpu>
	I0127 12:28:40.654240 1772097 main.go:141] libmachine: (no-preload-472479)   <features>
	I0127 12:28:40.654252 1772097 main.go:141] libmachine: (no-preload-472479)     <acpi/>
	I0127 12:28:40.654263 1772097 main.go:141] libmachine: (no-preload-472479)     <apic/>
	I0127 12:28:40.654276 1772097 main.go:141] libmachine: (no-preload-472479)     <pae/>
	I0127 12:28:40.654305 1772097 main.go:141] libmachine: (no-preload-472479)     
	I0127 12:28:40.654325 1772097 main.go:141] libmachine: (no-preload-472479)   </features>
	I0127 12:28:40.654336 1772097 main.go:141] libmachine: (no-preload-472479)   <cpu mode='host-passthrough'>
	I0127 12:28:40.654347 1772097 main.go:141] libmachine: (no-preload-472479)   
	I0127 12:28:40.654361 1772097 main.go:141] libmachine: (no-preload-472479)   </cpu>
	I0127 12:28:40.654372 1772097 main.go:141] libmachine: (no-preload-472479)   <os>
	I0127 12:28:40.654382 1772097 main.go:141] libmachine: (no-preload-472479)     <type>hvm</type>
	I0127 12:28:40.654393 1772097 main.go:141] libmachine: (no-preload-472479)     <boot dev='cdrom'/>
	I0127 12:28:40.654404 1772097 main.go:141] libmachine: (no-preload-472479)     <boot dev='hd'/>
	I0127 12:28:40.654419 1772097 main.go:141] libmachine: (no-preload-472479)     <bootmenu enable='no'/>
	I0127 12:28:40.654430 1772097 main.go:141] libmachine: (no-preload-472479)   </os>
	I0127 12:28:40.654437 1772097 main.go:141] libmachine: (no-preload-472479)   <devices>
	I0127 12:28:40.654451 1772097 main.go:141] libmachine: (no-preload-472479)     <disk type='file' device='cdrom'>
	I0127 12:28:40.654467 1772097 main.go:141] libmachine: (no-preload-472479)       <source file='/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/no-preload-472479/boot2docker.iso'/>
	I0127 12:28:40.654480 1772097 main.go:141] libmachine: (no-preload-472479)       <target dev='hdc' bus='scsi'/>
	I0127 12:28:40.654493 1772097 main.go:141] libmachine: (no-preload-472479)       <readonly/>
	I0127 12:28:40.654505 1772097 main.go:141] libmachine: (no-preload-472479)     </disk>
	I0127 12:28:40.654516 1772097 main.go:141] libmachine: (no-preload-472479)     <disk type='file' device='disk'>
	I0127 12:28:40.654529 1772097 main.go:141] libmachine: (no-preload-472479)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 12:28:40.654543 1772097 main.go:141] libmachine: (no-preload-472479)       <source file='/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/no-preload-472479/no-preload-472479.rawdisk'/>
	I0127 12:28:40.654554 1772097 main.go:141] libmachine: (no-preload-472479)       <target dev='hda' bus='virtio'/>
	I0127 12:28:40.654567 1772097 main.go:141] libmachine: (no-preload-472479)     </disk>
	I0127 12:28:40.654581 1772097 main.go:141] libmachine: (no-preload-472479)     <interface type='network'>
	I0127 12:28:40.654594 1772097 main.go:141] libmachine: (no-preload-472479)       <source network='mk-no-preload-472479'/>
	I0127 12:28:40.654605 1772097 main.go:141] libmachine: (no-preload-472479)       <model type='virtio'/>
	I0127 12:28:40.654618 1772097 main.go:141] libmachine: (no-preload-472479)     </interface>
	I0127 12:28:40.654628 1772097 main.go:141] libmachine: (no-preload-472479)     <interface type='network'>
	I0127 12:28:40.654640 1772097 main.go:141] libmachine: (no-preload-472479)       <source network='default'/>
	I0127 12:28:40.654655 1772097 main.go:141] libmachine: (no-preload-472479)       <model type='virtio'/>
	I0127 12:28:40.654667 1772097 main.go:141] libmachine: (no-preload-472479)     </interface>
	I0127 12:28:40.654678 1772097 main.go:141] libmachine: (no-preload-472479)     <serial type='pty'>
	I0127 12:28:40.654690 1772097 main.go:141] libmachine: (no-preload-472479)       <target port='0'/>
	I0127 12:28:40.654701 1772097 main.go:141] libmachine: (no-preload-472479)     </serial>
	I0127 12:28:40.654713 1772097 main.go:141] libmachine: (no-preload-472479)     <console type='pty'>
	I0127 12:28:40.654728 1772097 main.go:141] libmachine: (no-preload-472479)       <target type='serial' port='0'/>
	I0127 12:28:40.654739 1772097 main.go:141] libmachine: (no-preload-472479)     </console>
	I0127 12:28:40.654759 1772097 main.go:141] libmachine: (no-preload-472479)     <rng model='virtio'>
	I0127 12:28:40.654772 1772097 main.go:141] libmachine: (no-preload-472479)       <backend model='random'>/dev/random</backend>
	I0127 12:28:40.654785 1772097 main.go:141] libmachine: (no-preload-472479)     </rng>
	I0127 12:28:40.654793 1772097 main.go:141] libmachine: (no-preload-472479)     
	I0127 12:28:40.654801 1772097 main.go:141] libmachine: (no-preload-472479)     
	I0127 12:28:40.654811 1772097 main.go:141] libmachine: (no-preload-472479)   </devices>
	I0127 12:28:40.654820 1772097 main.go:141] libmachine: (no-preload-472479) </domain>
	I0127 12:28:40.654829 1772097 main.go:141] libmachine: (no-preload-472479) 
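
The XML above is the libvirt domain definition the kvm2 driver generates for no-preload-472479: 2 vCPUs, 2200 MiB of memory, the boot2docker ISO attached as a cdrom, the raw disk image, and two virtio NICs on the mk-no-preload-472479 and default networks. The driver defines and starts this domain through the libvirt API; an equivalent done by hand, sketched here with the virsh CLI rather than the driver's Go bindings, would be roughly:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// defineAndStartDomain is a rough stand-in for what the kvm2 driver does
// after generating the XML shown in the log: persist the domain definition,
// then boot it, using virsh against the same qemu:///system URI.
func defineAndStartDomain(name, xmlPath string) error {
	for _, args := range [][]string{
		{"-c", "qemu:///system", "define", xmlPath}, // persist the domain definition
		{"-c", "qemu:///system", "start", name},     // start the defined domain
	} {
		cmd := exec.Command("virsh", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("virsh %v: %w", args, err)
		}
	}
	return nil
}

func main() {
	if len(os.Args) != 3 {
		fmt.Fprintln(os.Stderr, "usage: definevm <domain-name> <domain.xml>")
		os.Exit(1)
	}
	if err := defineAndStartDomain(os.Args[1], os.Args[2]); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
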
	I0127 12:28:40.659135 1772097 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:25:1b:94 in network default
	I0127 12:28:40.659790 1772097 main.go:141] libmachine: (no-preload-472479) starting domain...
	I0127 12:28:40.659807 1772097 main.go:141] libmachine: (no-preload-472479) ensuring networks are active...
	I0127 12:28:40.659815 1772097 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:28:40.660712 1772097 main.go:141] libmachine: (no-preload-472479) Ensuring network default is active
	I0127 12:28:40.661077 1772097 main.go:141] libmachine: (no-preload-472479) Ensuring network mk-no-preload-472479 is active
	I0127 12:28:40.661602 1772097 main.go:141] libmachine: (no-preload-472479) getting domain XML...
	I0127 12:28:40.662545 1772097 main.go:141] libmachine: (no-preload-472479) creating domain...
	I0127 12:28:40.804131 1772097 cache.go:157] /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 exists
	I0127 12:28:40.804161 1772097 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.1" -> "/home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1" took 741.199308ms
	I0127 12:28:40.804175 1772097 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.1 -> /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 succeeded
	I0127 12:28:41.983370 1772097 main.go:141] libmachine: (no-preload-472479) waiting for IP...
	I0127 12:28:41.984407 1772097 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:28:41.985311 1772097 main.go:141] libmachine: (no-preload-472479) DBG | unable to find current IP address of domain no-preload-472479 in network mk-no-preload-472479
	I0127 12:28:41.985368 1772097 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:28:41.984974 1772120 retry.go:31] will retry after 298.567672ms: waiting for domain to come up
	I0127 12:28:41.997119 1772097 cache.go:157] /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 exists
	I0127 12:28:41.997152 1772097 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.1" -> "/home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1" took 1.934385794s
	I0127 12:28:41.997168 1772097 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.1 -> /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 succeeded
	I0127 12:28:42.052043 1772097 cache.go:157] /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 exists
	I0127 12:28:42.052075 1772097 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.1" -> "/home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1" took 1.989185759s
	I0127 12:28:42.052092 1772097 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.1 -> /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 succeeded
	I0127 12:28:42.119549 1772097 cache.go:157] /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0127 12:28:42.119572 1772097 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 2.056679001s
	I0127 12:28:42.119584 1772097 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0127 12:28:42.175623 1772097 cache.go:157] /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 exists
	I0127 12:28:42.175657 1772097 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.1" -> "/home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1" took 2.112922654s
	I0127 12:28:42.175670 1772097 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.1 -> /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 succeeded
	I0127 12:28:42.285760 1772097 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:28:42.286350 1772097 main.go:141] libmachine: (no-preload-472479) DBG | unable to find current IP address of domain no-preload-472479 in network mk-no-preload-472479
	I0127 12:28:42.286377 1772097 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:28:42.286315 1772120 retry.go:31] will retry after 291.81021ms: waiting for domain to come up
	I0127 12:28:42.396687 1772097 cache.go:157] /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 exists
	I0127 12:28:42.396719 1772097 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0" took 2.333749498s
	I0127 12:28:42.396733 1772097 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 succeeded
	I0127 12:28:42.396753 1772097 cache.go:87] Successfully saved all images to host disk.
	I0127 12:28:42.579831 1772097 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:28:42.580411 1772097 main.go:141] libmachine: (no-preload-472479) DBG | unable to find current IP address of domain no-preload-472479 in network mk-no-preload-472479
	I0127 12:28:42.580434 1772097 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:28:42.580375 1772120 retry.go:31] will retry after 353.939815ms: waiting for domain to come up
	I0127 12:28:42.935963 1772097 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:28:42.936536 1772097 main.go:141] libmachine: (no-preload-472479) DBG | unable to find current IP address of domain no-preload-472479 in network mk-no-preload-472479
	I0127 12:28:42.936566 1772097 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:28:42.936470 1772120 retry.go:31] will retry after 481.22611ms: waiting for domain to come up
	I0127 12:28:43.419185 1772097 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:28:43.419672 1772097 main.go:141] libmachine: (no-preload-472479) DBG | unable to find current IP address of domain no-preload-472479 in network mk-no-preload-472479
	I0127 12:28:43.419701 1772097 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:28:43.419641 1772120 retry.go:31] will retry after 732.731082ms: waiting for domain to come up
	I0127 12:28:44.153554 1772097 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:28:44.154045 1772097 main.go:141] libmachine: (no-preload-472479) DBG | unable to find current IP address of domain no-preload-472479 in network mk-no-preload-472479
	I0127 12:28:44.154097 1772097 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:28:44.154017 1772120 retry.go:31] will retry after 939.503013ms: waiting for domain to come up
	I0127 12:28:42.233510 1770976 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 12:28:42.233816 1770976 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
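
The two kubeadm lines above (from a different profile, process 1770976) show the kubelet-check failing: kubeadm polls the kubelet's local healthz endpoint on port 10248 and gets connection refused because the kubelet is not listening yet. The probe itself is just an HTTP GET; a minimal sketch of the same check:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// kubeletHealthy performs the probe the log describes: GET
// http://localhost:10248/healthz against the kubelet's healthz port.
// A connection-refused error means the kubelet process is not (yet)
// listening, which is exactly the failure recorded above.
func kubeletHealthy() error {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("kubelet healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	if err := kubeletHealthy(); err != nil {
		fmt.Println("kubelet not healthy:", err)
		return
	}
	fmt.Println("kubelet healthz OK")
}
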
	I0127 12:28:43.608133 1771581 pod_ready.go:103] pod "etcd-pause-502641" in "kube-system" namespace has status "Ready":"False"
	I0127 12:28:45.608511 1771581 pod_ready.go:103] pod "etcd-pause-502641" in "kube-system" namespace has status "Ready":"False"
	I0127 12:28:48.108087 1771581 pod_ready.go:103] pod "etcd-pause-502641" in "kube-system" namespace has status "Ready":"False"
	I0127 12:28:45.094815 1772097 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:28:45.095297 1772097 main.go:141] libmachine: (no-preload-472479) DBG | unable to find current IP address of domain no-preload-472479 in network mk-no-preload-472479
	I0127 12:28:45.095366 1772097 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:28:45.095297 1772120 retry.go:31] will retry after 1.113065701s: waiting for domain to come up
	I0127 12:28:46.210655 1772097 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:28:46.211204 1772097 main.go:141] libmachine: (no-preload-472479) DBG | unable to find current IP address of domain no-preload-472479 in network mk-no-preload-472479
	I0127 12:28:46.211237 1772097 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:28:46.211156 1772120 retry.go:31] will retry after 1.043624254s: waiting for domain to come up
	I0127 12:28:47.256190 1772097 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:28:47.256623 1772097 main.go:141] libmachine: (no-preload-472479) DBG | unable to find current IP address of domain no-preload-472479 in network mk-no-preload-472479
	I0127 12:28:47.256693 1772097 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:28:47.256615 1772120 retry.go:31] will retry after 1.732212198s: waiting for domain to come up
	I0127 12:28:48.990952 1772097 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:28:48.991464 1772097 main.go:141] libmachine: (no-preload-472479) DBG | unable to find current IP address of domain no-preload-472479 in network mk-no-preload-472479
	I0127 12:28:48.991493 1772097 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:28:48.991430 1772120 retry.go:31] will retry after 1.408754908s: waiting for domain to come up
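
The repeated "unable to find current IP address ... will retry after ..." lines show the driver polling the new domain for a DHCP lease, sleeping a jittered, growing interval between attempts. The general pattern (retry with backoff until a condition holds or a deadline passes) can be sketched as follows; the intervals here are illustrative, not minikube's exact schedule:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or the deadline passes,
// sleeping a jittered, growing interval between attempts -- the same shape as
// the "will retry after 298ms / 481ms / 732ms / ..." lines in the log above.
func retryWithBackoff(deadline time.Duration, fn func() error) error {
	stop := time.Now().Add(deadline)
	wait := 300 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(stop) {
			return fmt.Errorf("deadline exceeded, last error: %w", err)
		}
		// Add jitter, then grow the interval, capped so polling continues.
		time.Sleep(wait + time.Duration(rand.Int63n(int64(wait/2))))
		if wait < 2*time.Second {
			wait = wait * 3 / 2
		}
	}
}

func main() {
	attempts := 0
	err := retryWithBackoff(10*time.Second, func() error {
		attempts++
		if attempts < 4 {
			return errors.New("unable to find current IP address of domain")
		}
		return nil
	})
	fmt.Println("attempts:", attempts, "err:", err)
}
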
	I0127 12:28:50.108748 1771581 pod_ready.go:103] pod "etcd-pause-502641" in "kube-system" namespace has status "Ready":"False"
	I0127 12:28:52.107149 1771581 pod_ready.go:93] pod "etcd-pause-502641" in "kube-system" namespace has status "Ready":"True"
	I0127 12:28:52.107172 1771581 pod_ready.go:82] duration metric: took 15.505827552s for pod "etcd-pause-502641" in "kube-system" namespace to be "Ready" ...
	I0127 12:28:52.107182 1771581 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-502641" in "kube-system" namespace to be "Ready" ...
	I0127 12:28:52.110955 1771581 pod_ready.go:93] pod "kube-apiserver-pause-502641" in "kube-system" namespace has status "Ready":"True"
	I0127 12:28:52.110973 1771581 pod_ready.go:82] duration metric: took 3.784544ms for pod "kube-apiserver-pause-502641" in "kube-system" namespace to be "Ready" ...
	I0127 12:28:52.110982 1771581 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-502641" in "kube-system" namespace to be "Ready" ...
	I0127 12:28:52.115587 1771581 pod_ready.go:93] pod "kube-controller-manager-pause-502641" in "kube-system" namespace has status "Ready":"True"
	I0127 12:28:52.115603 1771581 pod_ready.go:82] duration metric: took 4.614741ms for pod "kube-controller-manager-pause-502641" in "kube-system" namespace to be "Ready" ...
	I0127 12:28:52.115612 1771581 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-d2qrr" in "kube-system" namespace to be "Ready" ...
	I0127 12:28:52.121259 1771581 pod_ready.go:93] pod "kube-proxy-d2qrr" in "kube-system" namespace has status "Ready":"True"
	I0127 12:28:52.121277 1771581 pod_ready.go:82] duration metric: took 5.659419ms for pod "kube-proxy-d2qrr" in "kube-system" namespace to be "Ready" ...
	I0127 12:28:52.121286 1771581 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-502641" in "kube-system" namespace to be "Ready" ...
	I0127 12:28:52.124700 1771581 pod_ready.go:93] pod "kube-scheduler-pause-502641" in "kube-system" namespace has status "Ready":"True"
	I0127 12:28:52.124718 1771581 pod_ready.go:82] duration metric: took 3.424354ms for pod "kube-scheduler-pause-502641" in "kube-system" namespace to be "Ready" ...
	I0127 12:28:52.124727 1771581 pod_ready.go:39] duration metric: took 15.534483199s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:28:52.124748 1771581 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 12:28:52.136119 1771581 ops.go:34] apiserver oom_adj: -16
	I0127 12:28:52.136139 1771581 kubeadm.go:597] duration metric: took 33.809616415s to restartPrimaryControlPlane
	I0127 12:28:52.136148 1771581 kubeadm.go:394] duration metric: took 34.185988527s to StartCluster
	I0127 12:28:52.136169 1771581 settings.go:142] acquiring lock: {Name:mk5612abdbdf8001cdf3481ad7c8001a04d496dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:28:52.136252 1771581 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20318-1724227/kubeconfig
	I0127 12:28:52.137501 1771581 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/kubeconfig: {Name:mk9a903cebca75da3c74ff68f708e0a4e05f999b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:28:52.137754 1771581 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.83.90 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 12:28:52.137883 1771581 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 12:28:52.137973 1771581 config.go:182] Loaded profile config "pause-502641": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:28:52.139357 1771581 out.go:177] * Verifying Kubernetes components...
	I0127 12:28:52.140049 1771581 out.go:177] * Enabled addons: 
	I0127 12:28:52.140655 1771581 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:28:52.141351 1771581 addons.go:514] duration metric: took 3.484373ms for enable addons: enabled=[]
	I0127 12:28:52.292597 1771581 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:28:52.306890 1771581 node_ready.go:35] waiting up to 6m0s for node "pause-502641" to be "Ready" ...
	I0127 12:28:52.309442 1771581 node_ready.go:49] node "pause-502641" has status "Ready":"True"
	I0127 12:28:52.309468 1771581 node_ready.go:38] duration metric: took 2.532758ms for node "pause-502641" to be "Ready" ...
	I0127 12:28:52.309478 1771581 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:28:52.507703 1771581 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-qrwg2" in "kube-system" namespace to be "Ready" ...
	I0127 12:28:52.905022 1771581 pod_ready.go:93] pod "coredns-668d6bf9bc-qrwg2" in "kube-system" namespace has status "Ready":"True"
	I0127 12:28:52.905047 1771581 pod_ready.go:82] duration metric: took 397.305948ms for pod "coredns-668d6bf9bc-qrwg2" in "kube-system" namespace to be "Ready" ...
	I0127 12:28:52.905057 1771581 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-502641" in "kube-system" namespace to be "Ready" ...
	I0127 12:28:50.402066 1772097 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:28:50.402605 1772097 main.go:141] libmachine: (no-preload-472479) DBG | unable to find current IP address of domain no-preload-472479 in network mk-no-preload-472479
	I0127 12:28:50.402642 1772097 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:28:50.402555 1772120 retry.go:31] will retry after 1.870396592s: waiting for domain to come up
	I0127 12:28:52.274269 1772097 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:28:52.274653 1772097 main.go:141] libmachine: (no-preload-472479) DBG | unable to find current IP address of domain no-preload-472479 in network mk-no-preload-472479
	I0127 12:28:52.274696 1772097 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:28:52.274626 1772120 retry.go:31] will retry after 2.848763778s: waiting for domain to come up
	I0127 12:28:53.304682 1771581 pod_ready.go:93] pod "etcd-pause-502641" in "kube-system" namespace has status "Ready":"True"
	I0127 12:28:53.304706 1771581 pod_ready.go:82] duration metric: took 399.642211ms for pod "etcd-pause-502641" in "kube-system" namespace to be "Ready" ...
	I0127 12:28:53.304716 1771581 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-502641" in "kube-system" namespace to be "Ready" ...
	I0127 12:28:53.704850 1771581 pod_ready.go:93] pod "kube-apiserver-pause-502641" in "kube-system" namespace has status "Ready":"True"
	I0127 12:28:53.704877 1771581 pod_ready.go:82] duration metric: took 400.154008ms for pod "kube-apiserver-pause-502641" in "kube-system" namespace to be "Ready" ...
	I0127 12:28:53.704888 1771581 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-502641" in "kube-system" namespace to be "Ready" ...
	I0127 12:28:54.104741 1771581 pod_ready.go:93] pod "kube-controller-manager-pause-502641" in "kube-system" namespace has status "Ready":"True"
	I0127 12:28:54.104768 1771581 pod_ready.go:82] duration metric: took 399.873704ms for pod "kube-controller-manager-pause-502641" in "kube-system" namespace to be "Ready" ...
	I0127 12:28:54.104778 1771581 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-d2qrr" in "kube-system" namespace to be "Ready" ...
	I0127 12:28:54.504698 1771581 pod_ready.go:93] pod "kube-proxy-d2qrr" in "kube-system" namespace has status "Ready":"True"
	I0127 12:28:54.504723 1771581 pod_ready.go:82] duration metric: took 399.936176ms for pod "kube-proxy-d2qrr" in "kube-system" namespace to be "Ready" ...
	I0127 12:28:54.504734 1771581 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-502641" in "kube-system" namespace to be "Ready" ...
	I0127 12:28:54.904914 1771581 pod_ready.go:93] pod "kube-scheduler-pause-502641" in "kube-system" namespace has status "Ready":"True"
	I0127 12:28:54.904942 1771581 pod_ready.go:82] duration metric: took 400.201404ms for pod "kube-scheduler-pause-502641" in "kube-system" namespace to be "Ready" ...
	I0127 12:28:54.904950 1771581 pod_ready.go:39] duration metric: took 2.595461317s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:28:54.904967 1771581 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:28:54.905017 1771581 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:28:54.920191 1771581 api_server.go:72] duration metric: took 2.782401263s to wait for apiserver process to appear ...
	I0127 12:28:54.920220 1771581 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:28:54.920236 1771581 api_server.go:253] Checking apiserver healthz at https://192.168.83.90:8443/healthz ...
	I0127 12:28:54.927166 1771581 api_server.go:279] https://192.168.83.90:8443/healthz returned 200:
	ok
	I0127 12:28:54.928174 1771581 api_server.go:141] control plane version: v1.32.1
	I0127 12:28:54.928197 1771581 api_server.go:131] duration metric: took 7.971867ms to wait for apiserver health ...
	I0127 12:28:54.928207 1771581 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:28:55.107795 1771581 system_pods.go:59] 6 kube-system pods found
	I0127 12:28:55.107829 1771581 system_pods.go:61] "coredns-668d6bf9bc-qrwg2" [1b76c781-b24e-464f-a27f-95295231951e] Running
	I0127 12:28:55.107836 1771581 system_pods.go:61] "etcd-pause-502641" [90f44ece-1535-4998-9248-d8fa48eaabc4] Running
	I0127 12:28:55.107841 1771581 system_pods.go:61] "kube-apiserver-pause-502641" [fe317602-9c59-4bb0-a26a-743c2ec3bfac] Running
	I0127 12:28:55.107846 1771581 system_pods.go:61] "kube-controller-manager-pause-502641" [e634f881-2616-46fb-9853-7d45aea66aab] Running
	I0127 12:28:55.107852 1771581 system_pods.go:61] "kube-proxy-d2qrr" [2ba3c737-544f-4e4d-9de2-9f0180e87605] Running
	I0127 12:28:55.107857 1771581 system_pods.go:61] "kube-scheduler-pause-502641" [5847e715-27f8-43d2-8591-0a16effc0680] Running
	I0127 12:28:55.107865 1771581 system_pods.go:74] duration metric: took 179.650814ms to wait for pod list to return data ...
	I0127 12:28:55.107874 1771581 default_sa.go:34] waiting for default service account to be created ...
	I0127 12:28:55.305204 1771581 default_sa.go:45] found service account: "default"
	I0127 12:28:55.305239 1771581 default_sa.go:55] duration metric: took 197.356985ms for default service account to be created ...
	I0127 12:28:55.305252 1771581 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 12:28:55.506556 1771581 system_pods.go:87] 6 kube-system pods found
	I0127 12:28:55.705570 1771581 system_pods.go:105] "coredns-668d6bf9bc-qrwg2" [1b76c781-b24e-464f-a27f-95295231951e] Running
	I0127 12:28:55.705594 1771581 system_pods.go:105] "etcd-pause-502641" [90f44ece-1535-4998-9248-d8fa48eaabc4] Running
	I0127 12:28:55.705601 1771581 system_pods.go:105] "kube-apiserver-pause-502641" [fe317602-9c59-4bb0-a26a-743c2ec3bfac] Running
	I0127 12:28:55.705606 1771581 system_pods.go:105] "kube-controller-manager-pause-502641" [e634f881-2616-46fb-9853-7d45aea66aab] Running
	I0127 12:28:55.705611 1771581 system_pods.go:105] "kube-proxy-d2qrr" [2ba3c737-544f-4e4d-9de2-9f0180e87605] Running
	I0127 12:28:55.705615 1771581 system_pods.go:105] "kube-scheduler-pause-502641" [5847e715-27f8-43d2-8591-0a16effc0680] Running
	I0127 12:28:55.705623 1771581 system_pods.go:147] duration metric: took 400.364037ms to wait for k8s-apps to be running ...
	I0127 12:28:55.705632 1771581 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 12:28:55.705681 1771581 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:28:55.719702 1771581 system_svc.go:56] duration metric: took 14.059568ms WaitForService to wait for kubelet
	I0127 12:28:55.719732 1771581 kubeadm.go:582] duration metric: took 3.581948245s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 12:28:55.719751 1771581 node_conditions.go:102] verifying NodePressure condition ...
	I0127 12:28:55.905681 1771581 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 12:28:55.905710 1771581 node_conditions.go:123] node cpu capacity is 2
	I0127 12:28:55.905723 1771581 node_conditions.go:105] duration metric: took 185.967606ms to run NodePressure ...
	I0127 12:28:55.905739 1771581 start.go:241] waiting for startup goroutines ...
	I0127 12:28:55.905748 1771581 start.go:246] waiting for cluster config update ...
	I0127 12:28:55.905757 1771581 start.go:255] writing updated cluster config ...
	I0127 12:28:55.906071 1771581 ssh_runner.go:195] Run: rm -f paused
	I0127 12:28:55.956952 1771581 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 12:28:55.958486 1771581 out.go:177] * Done! kubectl is now configured to use "pause-502641" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 27 12:28:56 pause-502641 crio[2124]: time="2025-01-27 12:28:56.577450535Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737980936577424541,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=093f77d2-3759-47c7-a19e-d40d68595663 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:28:56 pause-502641 crio[2124]: time="2025-01-27 12:28:56.577993131Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0a9abc13-96c8-4882-ad94-b95563a8aa71 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:28:56 pause-502641 crio[2124]: time="2025-01-27 12:28:56.578073178Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0a9abc13-96c8-4882-ad94-b95563a8aa71 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:28:56 pause-502641 crio[2124]: time="2025-01-27 12:28:56.579045590Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7a9b4eb8705368190f950de7b832bc1d06f9ed24f67522469e68d9b146e3a57,PodSandboxId:28f5f35deee13faaec1049f0b490ba41dba3b65b51d7b82a3ae66533c10e6461,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737980912198641526,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9240001f93fca0ae79dd77a451be1f17,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f994ab9d0c9cca639472262f18603ead605777d0268a0b726ce299d5098df69d,PodSandboxId:d06bdda67bf3c79720bb7a14cc5a86e7a30ab44dbc0e91111e9719b185d18972,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737980912218477599,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cccb33adea1f1d17b38799578e90268c,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f03254be7fc072cf783d61b7ab141c9fc3389038e5f2997c17e5cde102f373,PodSandboxId:96b1dcfa14c236c58d343113e10150ef63c604a279ca5a41fde4d3f20547eab2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737980912203153939,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b799dfa727fce35e9ff2a435d6c72826,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42d60156090c2ea017d6b9eb8012b56ee31c255ec4b91a98a5f801c570b3ac6c,PodSandboxId:7831575bc0ac469aea62dcd66711339a3f2265bb9789ca7111d434f3319c066a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737980912191214484,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e112df066f56963b26718b9af39bd9af,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc9ea111c52fa5209b7c8fd5ca9c006dca81b359ef213e3e8f985e6882812658,PodSandboxId:af08d08a4f4b28f34c014968566d9f3198b3567d09b73a491002abfd54900731,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737980898898535751,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-qrwg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b76c781-b24e-464f-a27f-95295231951e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"}
,{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62fb6b61bb5ab9fa5cc65bdce198999fb3c90cbcfcec4cc83f2a5df0d2f56342,PodSandboxId:f6d9111dcb33b135e4a44c2409ab923e30dba405d90c29bf17efbe8ef53c0c74,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737980898120376448,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d2qrr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ba3c737-544f-4e4d-9de2-9f0180e87605,},Annotations:map[string]string{io.
kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdc12a65d7653e487c3fd6d5e92a51425a53ef48b02dfc29a646f99aaeccdd93,PodSandboxId:7831575bc0ac469aea62dcd66711339a3f2265bb9789ca7111d434f3319c066a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737980898185985203,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e112df066f56963b26718b9af39bd9af,},Annotations:map[string]string{io.kubernetes.containe
r.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80632174b7df01b7a7d3a3e81a6e328a4a86b810b911af70188e80de74ab7383,PodSandboxId:d06bdda67bf3c79720bb7a14cc5a86e7a30ab44dbc0e91111e9719b185d18972,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737980898209771080,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cccb33adea1f1d17b38799578e90268c,},Annotations:map[string]string{io.kubernetes.
container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49dcc6de88610433f7f49beaa04d548d1649174d5d9f5c8aa0674232b462846,PodSandboxId:28f5f35deee13faaec1049f0b490ba41dba3b65b51d7b82a3ae66533c10e6461,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737980898211768306,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9240001f93fca0ae79dd77a451be1f17,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f2e731b7a1998a444a35c4c83ec0524edc5ee691ca23c3ff0c43adcac4b54f0,PodSandboxId:96b1dcfa14c236c58d343113e10150ef63c604a279ca5a41fde4d3f20547eab2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1737980898060262083,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b799dfa727fce35e9ff2a435d6c72826,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2852b5fafd3d9d698190468349c189ff20dea9b0adfdcaca272c7e307004195,PodSandboxId:83733cedb10202af53f2a001510340e720cc2c9a1802aa10fb250d979f637a26,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1737980852098744728,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-qrwg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b76c781-b24e-464f-a27f-95295231951e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9b1df0076dd8fe0f792802f4334195624c380a5c42f315ba48eac9805e7d57f,PodSandboxId:15f8030c6915f54ec759231d0c9addae4daa99839c275a6fbf51c994f7e333a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1737980851679856790,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d2qrr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 2ba3c737-544f-4e4d-9de2-9f0180e87605,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0a9abc13-96c8-4882-ad94-b95563a8aa71 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:28:56 pause-502641 crio[2124]: time="2025-01-27 12:28:56.618565839Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=845d6445-9f86-46e0-8c32-373478428776 name=/runtime.v1.RuntimeService/Version
	Jan 27 12:28:56 pause-502641 crio[2124]: time="2025-01-27 12:28:56.618631433Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=845d6445-9f86-46e0-8c32-373478428776 name=/runtime.v1.RuntimeService/Version
	Jan 27 12:28:56 pause-502641 crio[2124]: time="2025-01-27 12:28:56.620029002Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d627b1ed-1005-4799-a6f0-714205bb2ca3 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:28:56 pause-502641 crio[2124]: time="2025-01-27 12:28:56.620360499Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737980936620341701,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d627b1ed-1005-4799-a6f0-714205bb2ca3 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:28:56 pause-502641 crio[2124]: time="2025-01-27 12:28:56.620857466Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=daee9ba5-57bd-4587-8a27-c050c0edc6c6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:28:56 pause-502641 crio[2124]: time="2025-01-27 12:28:56.620942633Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=daee9ba5-57bd-4587-8a27-c050c0edc6c6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:28:56 pause-502641 crio[2124]: time="2025-01-27 12:28:56.621170590Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7a9b4eb8705368190f950de7b832bc1d06f9ed24f67522469e68d9b146e3a57,PodSandboxId:28f5f35deee13faaec1049f0b490ba41dba3b65b51d7b82a3ae66533c10e6461,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737980912198641526,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9240001f93fca0ae79dd77a451be1f17,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f994ab9d0c9cca639472262f18603ead605777d0268a0b726ce299d5098df69d,PodSandboxId:d06bdda67bf3c79720bb7a14cc5a86e7a30ab44dbc0e91111e9719b185d18972,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737980912218477599,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cccb33adea1f1d17b38799578e90268c,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f03254be7fc072cf783d61b7ab141c9fc3389038e5f2997c17e5cde102f373,PodSandboxId:96b1dcfa14c236c58d343113e10150ef63c604a279ca5a41fde4d3f20547eab2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737980912203153939,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b799dfa727fce35e9ff2a435d6c72826,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42d60156090c2ea017d6b9eb8012b56ee31c255ec4b91a98a5f801c570b3ac6c,PodSandboxId:7831575bc0ac469aea62dcd66711339a3f2265bb9789ca7111d434f3319c066a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737980912191214484,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e112df066f56963b26718b9af39bd9af,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc9ea111c52fa5209b7c8fd5ca9c006dca81b359ef213e3e8f985e6882812658,PodSandboxId:af08d08a4f4b28f34c014968566d9f3198b3567d09b73a491002abfd54900731,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737980898898535751,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-qrwg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b76c781-b24e-464f-a27f-95295231951e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"}
,{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62fb6b61bb5ab9fa5cc65bdce198999fb3c90cbcfcec4cc83f2a5df0d2f56342,PodSandboxId:f6d9111dcb33b135e4a44c2409ab923e30dba405d90c29bf17efbe8ef53c0c74,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737980898120376448,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d2qrr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ba3c737-544f-4e4d-9de2-9f0180e87605,},Annotations:map[string]string{io.
kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdc12a65d7653e487c3fd6d5e92a51425a53ef48b02dfc29a646f99aaeccdd93,PodSandboxId:7831575bc0ac469aea62dcd66711339a3f2265bb9789ca7111d434f3319c066a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737980898185985203,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e112df066f56963b26718b9af39bd9af,},Annotations:map[string]string{io.kubernetes.containe
r.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80632174b7df01b7a7d3a3e81a6e328a4a86b810b911af70188e80de74ab7383,PodSandboxId:d06bdda67bf3c79720bb7a14cc5a86e7a30ab44dbc0e91111e9719b185d18972,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737980898209771080,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cccb33adea1f1d17b38799578e90268c,},Annotations:map[string]string{io.kubernetes.
container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49dcc6de88610433f7f49beaa04d548d1649174d5d9f5c8aa0674232b462846,PodSandboxId:28f5f35deee13faaec1049f0b490ba41dba3b65b51d7b82a3ae66533c10e6461,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737980898211768306,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9240001f93fca0ae79dd77a451be1f17,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f2e731b7a1998a444a35c4c83ec0524edc5ee691ca23c3ff0c43adcac4b54f0,PodSandboxId:96b1dcfa14c236c58d343113e10150ef63c604a279ca5a41fde4d3f20547eab2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1737980898060262083,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b799dfa727fce35e9ff2a435d6c72826,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2852b5fafd3d9d698190468349c189ff20dea9b0adfdcaca272c7e307004195,PodSandboxId:83733cedb10202af53f2a001510340e720cc2c9a1802aa10fb250d979f637a26,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1737980852098744728,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-qrwg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b76c781-b24e-464f-a27f-95295231951e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9b1df0076dd8fe0f792802f4334195624c380a5c42f315ba48eac9805e7d57f,PodSandboxId:15f8030c6915f54ec759231d0c9addae4daa99839c275a6fbf51c994f7e333a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1737980851679856790,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d2qrr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 2ba3c737-544f-4e4d-9de2-9f0180e87605,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=daee9ba5-57bd-4587-8a27-c050c0edc6c6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:28:56 pause-502641 crio[2124]: time="2025-01-27 12:28:56.659703767Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aa9f5bcb-8960-4682-8cd3-f4aeea505634 name=/runtime.v1.RuntimeService/Version
	Jan 27 12:28:56 pause-502641 crio[2124]: time="2025-01-27 12:28:56.659794534Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aa9f5bcb-8960-4682-8cd3-f4aeea505634 name=/runtime.v1.RuntimeService/Version
	Jan 27 12:28:56 pause-502641 crio[2124]: time="2025-01-27 12:28:56.661271572Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8aa44a87-f879-41fc-a980-67b3637b5dd1 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:28:56 pause-502641 crio[2124]: time="2025-01-27 12:28:56.661634820Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737980936661614974,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8aa44a87-f879-41fc-a980-67b3637b5dd1 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:28:56 pause-502641 crio[2124]: time="2025-01-27 12:28:56.662193413Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=717e8eca-d014-4976-81dd-b2c9bfb6a846 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:28:56 pause-502641 crio[2124]: time="2025-01-27 12:28:56.662253571Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=717e8eca-d014-4976-81dd-b2c9bfb6a846 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:28:56 pause-502641 crio[2124]: time="2025-01-27 12:28:56.662473774Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7a9b4eb8705368190f950de7b832bc1d06f9ed24f67522469e68d9b146e3a57,PodSandboxId:28f5f35deee13faaec1049f0b490ba41dba3b65b51d7b82a3ae66533c10e6461,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737980912198641526,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9240001f93fca0ae79dd77a451be1f17,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f994ab9d0c9cca639472262f18603ead605777d0268a0b726ce299d5098df69d,PodSandboxId:d06bdda67bf3c79720bb7a14cc5a86e7a30ab44dbc0e91111e9719b185d18972,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737980912218477599,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cccb33adea1f1d17b38799578e90268c,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f03254be7fc072cf783d61b7ab141c9fc3389038e5f2997c17e5cde102f373,PodSandboxId:96b1dcfa14c236c58d343113e10150ef63c604a279ca5a41fde4d3f20547eab2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737980912203153939,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b799dfa727fce35e9ff2a435d6c72826,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42d60156090c2ea017d6b9eb8012b56ee31c255ec4b91a98a5f801c570b3ac6c,PodSandboxId:7831575bc0ac469aea62dcd66711339a3f2265bb9789ca7111d434f3319c066a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737980912191214484,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e112df066f56963b26718b9af39bd9af,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc9ea111c52fa5209b7c8fd5ca9c006dca81b359ef213e3e8f985e6882812658,PodSandboxId:af08d08a4f4b28f34c014968566d9f3198b3567d09b73a491002abfd54900731,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737980898898535751,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-qrwg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b76c781-b24e-464f-a27f-95295231951e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"}
,{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62fb6b61bb5ab9fa5cc65bdce198999fb3c90cbcfcec4cc83f2a5df0d2f56342,PodSandboxId:f6d9111dcb33b135e4a44c2409ab923e30dba405d90c29bf17efbe8ef53c0c74,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737980898120376448,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d2qrr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ba3c737-544f-4e4d-9de2-9f0180e87605,},Annotations:map[string]string{io.
kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdc12a65d7653e487c3fd6d5e92a51425a53ef48b02dfc29a646f99aaeccdd93,PodSandboxId:7831575bc0ac469aea62dcd66711339a3f2265bb9789ca7111d434f3319c066a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737980898185985203,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e112df066f56963b26718b9af39bd9af,},Annotations:map[string]string{io.kubernetes.containe
r.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80632174b7df01b7a7d3a3e81a6e328a4a86b810b911af70188e80de74ab7383,PodSandboxId:d06bdda67bf3c79720bb7a14cc5a86e7a30ab44dbc0e91111e9719b185d18972,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737980898209771080,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cccb33adea1f1d17b38799578e90268c,},Annotations:map[string]string{io.kubernetes.
container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49dcc6de88610433f7f49beaa04d548d1649174d5d9f5c8aa0674232b462846,PodSandboxId:28f5f35deee13faaec1049f0b490ba41dba3b65b51d7b82a3ae66533c10e6461,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737980898211768306,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9240001f93fca0ae79dd77a451be1f17,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f2e731b7a1998a444a35c4c83ec0524edc5ee691ca23c3ff0c43adcac4b54f0,PodSandboxId:96b1dcfa14c236c58d343113e10150ef63c604a279ca5a41fde4d3f20547eab2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1737980898060262083,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b799dfa727fce35e9ff2a435d6c72826,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2852b5fafd3d9d698190468349c189ff20dea9b0adfdcaca272c7e307004195,PodSandboxId:83733cedb10202af53f2a001510340e720cc2c9a1802aa10fb250d979f637a26,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1737980852098744728,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-qrwg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b76c781-b24e-464f-a27f-95295231951e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9b1df0076dd8fe0f792802f4334195624c380a5c42f315ba48eac9805e7d57f,PodSandboxId:15f8030c6915f54ec759231d0c9addae4daa99839c275a6fbf51c994f7e333a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1737980851679856790,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d2qrr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 2ba3c737-544f-4e4d-9de2-9f0180e87605,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=717e8eca-d014-4976-81dd-b2c9bfb6a846 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:28:56 pause-502641 crio[2124]: time="2025-01-27 12:28:56.701347103Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c9371a48-041f-4508-8130-558f2ecc33e4 name=/runtime.v1.RuntimeService/Version
	Jan 27 12:28:56 pause-502641 crio[2124]: time="2025-01-27 12:28:56.701419913Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c9371a48-041f-4508-8130-558f2ecc33e4 name=/runtime.v1.RuntimeService/Version
	Jan 27 12:28:56 pause-502641 crio[2124]: time="2025-01-27 12:28:56.702491276Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a9d18335-c696-4558-bf7a-73a3401eba12 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:28:56 pause-502641 crio[2124]: time="2025-01-27 12:28:56.702855404Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737980936702835094,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a9d18335-c696-4558-bf7a-73a3401eba12 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:28:56 pause-502641 crio[2124]: time="2025-01-27 12:28:56.703503393Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f179959c-b05c-45d2-bb45-7009c72dcc80 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:28:56 pause-502641 crio[2124]: time="2025-01-27 12:28:56.703557978Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f179959c-b05c-45d2-bb45-7009c72dcc80 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:28:56 pause-502641 crio[2124]: time="2025-01-27 12:28:56.703786723Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7a9b4eb8705368190f950de7b832bc1d06f9ed24f67522469e68d9b146e3a57,PodSandboxId:28f5f35deee13faaec1049f0b490ba41dba3b65b51d7b82a3ae66533c10e6461,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737980912198641526,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9240001f93fca0ae79dd77a451be1f17,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f994ab9d0c9cca639472262f18603ead605777d0268a0b726ce299d5098df69d,PodSandboxId:d06bdda67bf3c79720bb7a14cc5a86e7a30ab44dbc0e91111e9719b185d18972,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737980912218477599,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cccb33adea1f1d17b38799578e90268c,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f03254be7fc072cf783d61b7ab141c9fc3389038e5f2997c17e5cde102f373,PodSandboxId:96b1dcfa14c236c58d343113e10150ef63c604a279ca5a41fde4d3f20547eab2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737980912203153939,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b799dfa727fce35e9ff2a435d6c72826,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42d60156090c2ea017d6b9eb8012b56ee31c255ec4b91a98a5f801c570b3ac6c,PodSandboxId:7831575bc0ac469aea62dcd66711339a3f2265bb9789ca7111d434f3319c066a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737980912191214484,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e112df066f56963b26718b9af39bd9af,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc9ea111c52fa5209b7c8fd5ca9c006dca81b359ef213e3e8f985e6882812658,PodSandboxId:af08d08a4f4b28f34c014968566d9f3198b3567d09b73a491002abfd54900731,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737980898898535751,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-qrwg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b76c781-b24e-464f-a27f-95295231951e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"}
,{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62fb6b61bb5ab9fa5cc65bdce198999fb3c90cbcfcec4cc83f2a5df0d2f56342,PodSandboxId:f6d9111dcb33b135e4a44c2409ab923e30dba405d90c29bf17efbe8ef53c0c74,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737980898120376448,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d2qrr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ba3c737-544f-4e4d-9de2-9f0180e87605,},Annotations:map[string]string{io.
kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdc12a65d7653e487c3fd6d5e92a51425a53ef48b02dfc29a646f99aaeccdd93,PodSandboxId:7831575bc0ac469aea62dcd66711339a3f2265bb9789ca7111d434f3319c066a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737980898185985203,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e112df066f56963b26718b9af39bd9af,},Annotations:map[string]string{io.kubernetes.containe
r.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80632174b7df01b7a7d3a3e81a6e328a4a86b810b911af70188e80de74ab7383,PodSandboxId:d06bdda67bf3c79720bb7a14cc5a86e7a30ab44dbc0e91111e9719b185d18972,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737980898209771080,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cccb33adea1f1d17b38799578e90268c,},Annotations:map[string]string{io.kubernetes.
container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49dcc6de88610433f7f49beaa04d548d1649174d5d9f5c8aa0674232b462846,PodSandboxId:28f5f35deee13faaec1049f0b490ba41dba3b65b51d7b82a3ae66533c10e6461,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737980898211768306,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9240001f93fca0ae79dd77a451be1f17,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f2e731b7a1998a444a35c4c83ec0524edc5ee691ca23c3ff0c43adcac4b54f0,PodSandboxId:96b1dcfa14c236c58d343113e10150ef63c604a279ca5a41fde4d3f20547eab2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1737980898060262083,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b799dfa727fce35e9ff2a435d6c72826,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2852b5fafd3d9d698190468349c189ff20dea9b0adfdcaca272c7e307004195,PodSandboxId:83733cedb10202af53f2a001510340e720cc2c9a1802aa10fb250d979f637a26,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1737980852098744728,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-qrwg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b76c781-b24e-464f-a27f-95295231951e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9b1df0076dd8fe0f792802f4334195624c380a5c42f315ba48eac9805e7d57f,PodSandboxId:15f8030c6915f54ec759231d0c9addae4daa99839c275a6fbf51c994f7e333a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1737980851679856790,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d2qrr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 2ba3c737-544f-4e4d-9de2-9f0180e87605,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f179959c-b05c-45d2-bb45-7009c72dcc80 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	f994ab9d0c9cc       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35   24 seconds ago       Running             kube-controller-manager   2                   d06bdda67bf3c       kube-controller-manager-pause-502641
	e6f03254be7fc       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1   24 seconds ago       Running             kube-scheduler            2                   96b1dcfa14c23       kube-scheduler-pause-502641
	a7a9b4eb87053       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   24 seconds ago       Running             etcd                      2                   28f5f35deee13       etcd-pause-502641
	42d60156090c2       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a   24 seconds ago       Running             kube-apiserver            2                   7831575bc0ac4       kube-apiserver-pause-502641
	bc9ea111c52fa       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   37 seconds ago       Running             coredns                   1                   af08d08a4f4b2       coredns-668d6bf9bc-qrwg2
	f49dcc6de8861       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   38 seconds ago       Exited              etcd                      1                   28f5f35deee13       etcd-pause-502641
	80632174b7df0       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35   38 seconds ago       Exited              kube-controller-manager   1                   d06bdda67bf3c       kube-controller-manager-pause-502641
	cdc12a65d7653       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a   38 seconds ago       Exited              kube-apiserver            1                   7831575bc0ac4       kube-apiserver-pause-502641
	62fb6b61bb5ab       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a   38 seconds ago       Running             kube-proxy                1                   f6d9111dcb33b       kube-proxy-d2qrr
	5f2e731b7a199       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1   38 seconds ago       Exited              kube-scheduler            1                   96b1dcfa14c23       kube-scheduler-pause-502641
	e2852b5fafd3d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   0                   83733cedb1020       coredns-668d6bf9bc-qrwg2
	e9b1df0076dd8       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a   About a minute ago   Exited              kube-proxy                0                   15f8030c6915f       kube-proxy-d2qrr
	
	
	==> coredns [bc9ea111c52fa5209b7c8fd5ca9c006dca81b359ef213e3e8f985e6882812658] <==
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:43524 - 13320 "HINFO IN 125212257908539037.1140348419135132907. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.006846401s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.3:39328->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[2101664387]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 12:28:19.215) (total time: 10786ms):
	Trace[2101664387]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.3:39328->10.96.0.1:443: read: connection reset by peer 10786ms (12:28:30.001)
	Trace[2101664387]: [10.78613414s] [10.78613414s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.3:39328->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.3:39320->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1106148870]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 12:28:19.215) (total time: 10787ms):
	Trace[1106148870]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.3:39320->10.96.0.1:443: read: connection reset by peer 10787ms (12:28:30.002)
	Trace[1106148870]: [10.787302327s] [10.787302327s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.3:39320->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.3:39312->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1620791708]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 12:28:19.214) (total time: 10787ms):
	Trace[1620791708]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.3:39312->10.96.0.1:443: read: connection reset by peer 10787ms (12:28:30.002)
	Trace[1620791708]: [10.78798866s] [10.78798866s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.3:39312->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> coredns [e2852b5fafd3d9d698190468349c189ff20dea9b0adfdcaca272c7e307004195] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:59119 - 54153 "HINFO IN 2493060570586819184.7280973419994849227. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011331475s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1471642220]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 12:27:32.299) (total time: 30001ms):
	Trace[1471642220]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (12:28:02.301)
	Trace[1471642220]: [30.001857467s] [30.001857467s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1296202342]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 12:27:32.301) (total time: 30000ms):
	Trace[1296202342]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (12:28:02.302)
	Trace[1296202342]: [30.000662008s] [30.000662008s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[806502041]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 12:27:32.299) (total time: 30002ms):
	Trace[806502041]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (12:28:02.302)
	Trace[806502041]: [30.00251854s] [30.00251854s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-502641
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-502641
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650
	                    minikube.k8s.io/name=pause-502641
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T12_27_27_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 12:27:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-502641
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 12:28:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 12:28:34 +0000   Mon, 27 Jan 2025 12:27:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 12:28:34 +0000   Mon, 27 Jan 2025 12:27:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 12:28:34 +0000   Mon, 27 Jan 2025 12:27:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 12:28:34 +0000   Mon, 27 Jan 2025 12:27:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.90
	  Hostname:    pause-502641
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 a7b9d39c50b34d4fa1590cfc705a9046
	  System UUID:                a7b9d39c-50b3-4d4f-a159-0cfc705a9046
	  Boot ID:                    99446827-dbec-4287-9ca9-be203fb13ea4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-qrwg2                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     85s
	  kube-system                 etcd-pause-502641                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         90s
	  kube-system                 kube-apiserver-pause-502641             250m (12%)    0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-controller-manager-pause-502641    200m (10%)    0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-proxy-d2qrr                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-scheduler-pause-502641             100m (5%)     0 (0%)      0 (0%)           0 (0%)         90s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 84s                kube-proxy       
	  Normal  Starting                 22s                kube-proxy       
	  Normal  NodeHasSufficientMemory  96s (x8 over 96s)  kubelet          Node pause-502641 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    96s (x8 over 96s)  kubelet          Node pause-502641 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     96s (x7 over 96s)  kubelet          Node pause-502641 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    90s                kubelet          Node pause-502641 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  90s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  90s                kubelet          Node pause-502641 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     90s                kubelet          Node pause-502641 status is now: NodeHasSufficientPID
	  Normal  Starting                 90s                kubelet          Starting kubelet.
	  Normal  NodeReady                89s                kubelet          Node pause-502641 status is now: NodeReady
	  Normal  RegisteredNode           86s                node-controller  Node pause-502641 event: Registered Node pause-502641 in Controller
	  Normal  Starting                 25s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s (x8 over 25s)  kubelet          Node pause-502641 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)  kubelet          Node pause-502641 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x7 over 25s)  kubelet          Node pause-502641 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19s                node-controller  Node pause-502641 event: Registered Node pause-502641 in Controller
	
	
	==> dmesg <==
	[  +8.677579] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.053419] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065638] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.160761] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.153929] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.279561] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +4.002254] systemd-fstab-generator[745]: Ignoring "noauto" option for root device
	[  +4.490980] systemd-fstab-generator[883]: Ignoring "noauto" option for root device
	[  +0.057380] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.994965] systemd-fstab-generator[1231]: Ignoring "noauto" option for root device
	[  +0.086003] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.819652] systemd-fstab-generator[1362]: Ignoring "noauto" option for root device
	[  +0.235740] kauditd_printk_skb: 46 callbacks suppressed
	[Jan27 12:28] kauditd_printk_skb: 50 callbacks suppressed
	[  +8.860927] systemd-fstab-generator[2048]: Ignoring "noauto" option for root device
	[  +0.138973] systemd-fstab-generator[2061]: Ignoring "noauto" option for root device
	[  +0.165010] systemd-fstab-generator[2075]: Ignoring "noauto" option for root device
	[  +0.137114] systemd-fstab-generator[2087]: Ignoring "noauto" option for root device
	[  +0.271873] systemd-fstab-generator[2115]: Ignoring "noauto" option for root device
	[  +1.277517] systemd-fstab-generator[2238]: Ignoring "noauto" option for root device
	[  +9.371424] kauditd_printk_skb: 197 callbacks suppressed
	[  +4.791745] systemd-fstab-generator[3038]: Ignoring "noauto" option for root device
	[  +0.390967] kauditd_printk_skb: 15 callbacks suppressed
	[  +6.320070] kauditd_printk_skb: 15 callbacks suppressed
	[ +13.990031] systemd-fstab-generator[3380]: Ignoring "noauto" option for root device
	
	
	==> etcd [a7a9b4eb8705368190f950de7b832bc1d06f9ed24f67522469e68d9b146e3a57] <==
	{"level":"info","ts":"2025-01-27T12:28:32.562165Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea071dc50db5730b switched to configuration voters=(16863480061887869707)"}
	{"level":"info","ts":"2025-01-27T12:28:32.562240Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.83.90:2380"}
	{"level":"info","ts":"2025-01-27T12:28:32.565779Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-01-27T12:28:32.566738Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-01-27T12:28:32.569004Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"37b821ace5f4705e","local-member-id":"ea071dc50db5730b","added-peer-id":"ea071dc50db5730b","added-peer-peer-urls":["https://192.168.83.90:2380"]}
	{"level":"info","ts":"2025-01-27T12:28:32.569199Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"37b821ace5f4705e","local-member-id":"ea071dc50db5730b","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T12:28:32.569324Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T12:28:32.569261Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.83.90:2380"}
	{"level":"info","ts":"2025-01-27T12:28:32.569274Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-01-27T12:28:33.519022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea071dc50db5730b is starting a new election at term 2"}
	{"level":"info","ts":"2025-01-27T12:28:33.519166Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea071dc50db5730b became pre-candidate at term 2"}
	{"level":"info","ts":"2025-01-27T12:28:33.519239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea071dc50db5730b received MsgPreVoteResp from ea071dc50db5730b at term 2"}
	{"level":"info","ts":"2025-01-27T12:28:33.519332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea071dc50db5730b became candidate at term 3"}
	{"level":"info","ts":"2025-01-27T12:28:33.519420Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea071dc50db5730b received MsgVoteResp from ea071dc50db5730b at term 3"}
	{"level":"info","ts":"2025-01-27T12:28:33.519462Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea071dc50db5730b became leader at term 3"}
	{"level":"info","ts":"2025-01-27T12:28:33.519496Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea071dc50db5730b elected leader ea071dc50db5730b at term 3"}
	{"level":"info","ts":"2025-01-27T12:28:33.524610Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"ea071dc50db5730b","local-member-attributes":"{Name:pause-502641 ClientURLs:[https://192.168.83.90:2379]}","request-path":"/0/members/ea071dc50db5730b/attributes","cluster-id":"37b821ace5f4705e","publish-timeout":"7s"}
	{"level":"info","ts":"2025-01-27T12:28:33.524694Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T12:28:33.525322Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T12:28:33.525989Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T12:28:33.526144Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T12:28:33.526581Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.83.90:2379"}
	{"level":"info","ts":"2025-01-27T12:28:33.526960Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-01-27T12:28:33.527188Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-01-27T12:28:33.527219Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [f49dcc6de88610433f7f49beaa04d548d1649174d5d9f5c8aa0674232b462846] <==
	
	
	==> kernel <==
	 12:28:57 up 2 min,  0 users,  load average: 0.31, 0.15, 0.06
	Linux pause-502641 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [42d60156090c2ea017d6b9eb8012b56ee31c255ec4b91a98a5f801c570b3ac6c] <==
	I0127 12:28:34.720704       1 policy_source.go:240] refreshing policies
	I0127 12:28:34.722754       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0127 12:28:34.724064       1 shared_informer.go:320] Caches are synced for configmaps
	I0127 12:28:34.724113       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0127 12:28:34.733719       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0127 12:28:34.734102       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0127 12:28:34.743450       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0127 12:28:34.744987       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0127 12:28:34.745660       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0127 12:28:34.745800       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0127 12:28:34.746157       1 aggregator.go:171] initial CRD sync complete...
	I0127 12:28:34.746223       1 autoregister_controller.go:144] Starting autoregister controller
	I0127 12:28:34.746246       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0127 12:28:34.746267       1 cache.go:39] Caches are synced for autoregister controller
	I0127 12:28:34.759732       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0127 12:28:34.797249       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0127 12:28:34.816000       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0127 12:28:34.910923       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0127 12:28:35.625166       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0127 12:28:36.431484       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0127 12:28:36.481838       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0127 12:28:36.514409       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0127 12:28:36.522855       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0127 12:28:38.137569       1 controller.go:615] quota admission added evaluator for: endpoints
	I0127 12:28:38.188301       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [cdc12a65d7653e487c3fd6d5e92a51425a53ef48b02dfc29a646f99aaeccdd93] <==
	W0127 12:28:18.758254       1 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags
	I0127 12:28:18.758630       1 options.go:238] external host was not specified, using 192.168.83.90
	I0127 12:28:18.763563       1 server.go:143] Version: v1.32.1
	I0127 12:28:18.763603       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:28:19.383022       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	W0127 12:28:19.385075       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:28:19.385222       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0127 12:28:19.393836       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0127 12:28:19.400793       1 plugins.go:157] Loaded 13 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0127 12:28:19.400824       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0127 12:28:19.401094       1 instance.go:233] Using reconciler: lease
	W0127 12:28:19.402979       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:28:20.386546       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:28:20.386647       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:28:20.403552       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:28:21.760233       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:28:21.872831       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:28:22.255001       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:28:24.126014       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:28:24.404438       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:28:25.233831       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:28:27.756957       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:28:28.715483       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [80632174b7df01b7a7d3a3e81a6e328a4a86b810b911af70188e80de74ab7383] <==
	
	
	==> kube-controller-manager [f994ab9d0c9cca639472262f18603ead605777d0268a0b726ce299d5098df69d] <==
	I0127 12:28:37.922949       1 shared_informer.go:320] Caches are synced for node
	I0127 12:28:37.923034       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0127 12:28:37.923114       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0127 12:28:37.923146       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0127 12:28:37.923171       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0127 12:28:37.923321       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-502641"
	I0127 12:28:37.926219       1 shared_informer.go:320] Caches are synced for expand
	I0127 12:28:37.931389       1 shared_informer.go:320] Caches are synced for PV protection
	I0127 12:28:37.931457       1 shared_informer.go:320] Caches are synced for namespace
	I0127 12:28:37.933760       1 shared_informer.go:320] Caches are synced for stateful set
	I0127 12:28:37.933803       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0127 12:28:37.933910       1 shared_informer.go:320] Caches are synced for service account
	I0127 12:28:37.935130       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0127 12:28:37.935183       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0127 12:28:37.935855       1 shared_informer.go:320] Caches are synced for PVC protection
	I0127 12:28:37.935907       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0127 12:28:37.942066       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0127 12:28:37.943251       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 12:28:37.944418       1 shared_informer.go:320] Caches are synced for deployment
	I0127 12:28:37.945080       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 12:28:37.965818       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0127 12:28:37.973288       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 12:28:37.978699       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 12:28:37.978737       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0127 12:28:37.978748       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [62fb6b61bb5ab9fa5cc65bdce198999fb3c90cbcfcec4cc83f2a5df0d2f56342] <==
	 >
	E0127 12:28:19.049758       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 12:28:30.000202       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-502641\": dial tcp 192.168.83.90:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.83.90:57090->192.168.83.90:8443: read: connection reset by peer"
	E0127 12:28:31.170163       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-502641\": dial tcp 192.168.83.90:8443: connect: connection refused"
	I0127 12:28:34.778280       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.83.90"]
	E0127 12:28:34.778365       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 12:28:34.862341       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 12:28:34.862389       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 12:28:34.862413       1 server_linux.go:170] "Using iptables Proxier"
	I0127 12:28:34.867030       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 12:28:34.867360       1 server.go:497] "Version info" version="v1.32.1"
	I0127 12:28:34.867388       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:28:34.869536       1 config.go:199] "Starting service config controller"
	I0127 12:28:34.869591       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 12:28:34.869612       1 config.go:105] "Starting endpoint slice config controller"
	I0127 12:28:34.869630       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 12:28:34.872386       1 config.go:329] "Starting node config controller"
	I0127 12:28:34.872416       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 12:28:34.970767       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 12:28:34.970817       1 shared_informer.go:320] Caches are synced for service config
	I0127 12:28:34.973294       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e9b1df0076dd8fe0f792802f4334195624c380a5c42f315ba48eac9805e7d57f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 12:27:32.072301       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 12:27:32.100806       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.83.90"]
	E0127 12:27:32.101046       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 12:27:32.175360       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 12:27:32.175552       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 12:27:32.175754       1 server_linux.go:170] "Using iptables Proxier"
	I0127 12:27:32.182104       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 12:27:32.185227       1 server.go:497] "Version info" version="v1.32.1"
	I0127 12:27:32.185290       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:27:32.188174       1 config.go:199] "Starting service config controller"
	I0127 12:27:32.188748       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 12:27:32.189035       1 config.go:105] "Starting endpoint slice config controller"
	I0127 12:27:32.189095       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 12:27:32.191254       1 config.go:329] "Starting node config controller"
	I0127 12:27:32.191351       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 12:27:32.289792       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 12:27:32.289859       1 shared_informer.go:320] Caches are synced for service config
	I0127 12:27:32.291500       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5f2e731b7a1998a444a35c4c83ec0524edc5ee691ca23c3ff0c43adcac4b54f0] <==
	I0127 12:28:19.682746       1 serving.go:386] Generated self-signed cert in-memory
	W0127 12:28:30.001050       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.83.90:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.83.90:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.83.90:57112->192.168.83.90:8443: read: connection reset by peer
	W0127 12:28:30.001130       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0127 12:28:30.001139       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0127 12:28:30.013947       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0127 12:28:30.013966       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0127 12:28:30.013989       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0127 12:28:30.015962       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E0127 12:28:30.016036       1 server.go:266] "waiting for handlers to sync" err="context canceled"
	E0127 12:28:30.016084       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e6f03254be7fc072cf783d61b7ab141c9fc3389038e5f2997c17e5cde102f373] <==
	I0127 12:28:33.057253       1 serving.go:386] Generated self-signed cert in-memory
	I0127 12:28:34.798931       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0127 12:28:34.803150       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:28:34.836819       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0127 12:28:34.838750       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 12:28:34.838960       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0127 12:28:34.838988       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0127 12:28:34.839330       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0127 12:28:34.839351       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 12:28:34.839475       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0127 12:28:34.839495       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0127 12:28:34.940528       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0127 12:28:34.940602       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0127 12:28:34.940616       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 12:28:33 pause-502641 kubelet[3045]: E0127 12:28:33.854833    3045 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-502641\" not found" node="pause-502641"
	Jan 27 12:28:34 pause-502641 kubelet[3045]: I0127 12:28:34.730061    3045 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-502641"
	Jan 27 12:28:34 pause-502641 kubelet[3045]: I0127 12:28:34.731394    3045 apiserver.go:52] "Watching apiserver"
	Jan 27 12:28:34 pause-502641 kubelet[3045]: I0127 12:28:34.828855    3045 kubelet_node_status.go:125] "Node was previously registered" node="pause-502641"
	Jan 27 12:28:34 pause-502641 kubelet[3045]: I0127 12:28:34.829126    3045 kubelet_node_status.go:79] "Successfully registered node" node="pause-502641"
	Jan 27 12:28:34 pause-502641 kubelet[3045]: I0127 12:28:34.829211    3045 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jan 27 12:28:34 pause-502641 kubelet[3045]: I0127 12:28:34.829585    3045 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jan 27 12:28:34 pause-502641 kubelet[3045]: I0127 12:28:34.830276    3045 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jan 27 12:28:34 pause-502641 kubelet[3045]: I0127 12:28:34.851513    3045 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-502641"
	Jan 27 12:28:34 pause-502641 kubelet[3045]: I0127 12:28:34.851830    3045 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-502641"
	Jan 27 12:28:34 pause-502641 kubelet[3045]: E0127 12:28:34.887040    3045 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-pause-502641\" already exists" pod="kube-system/etcd-pause-502641"
	Jan 27 12:28:34 pause-502641 kubelet[3045]: I0127 12:28:34.887075    3045 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-502641"
	Jan 27 12:28:34 pause-502641 kubelet[3045]: I0127 12:28:34.893394    3045 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2ba3c737-544f-4e4d-9de2-9f0180e87605-xtables-lock\") pod \"kube-proxy-d2qrr\" (UID: \"2ba3c737-544f-4e4d-9de2-9f0180e87605\") " pod="kube-system/kube-proxy-d2qrr"
	Jan 27 12:28:34 pause-502641 kubelet[3045]: I0127 12:28:34.893568    3045 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ba3c737-544f-4e4d-9de2-9f0180e87605-lib-modules\") pod \"kube-proxy-d2qrr\" (UID: \"2ba3c737-544f-4e4d-9de2-9f0180e87605\") " pod="kube-system/kube-proxy-d2qrr"
	Jan 27 12:28:34 pause-502641 kubelet[3045]: E0127 12:28:34.901288    3045 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-502641\" already exists" pod="kube-system/kube-apiserver-pause-502641"
	Jan 27 12:28:34 pause-502641 kubelet[3045]: E0127 12:28:34.901749    3045 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-pause-502641\" already exists" pod="kube-system/etcd-pause-502641"
	Jan 27 12:28:34 pause-502641 kubelet[3045]: E0127 12:28:34.924828    3045 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-502641\" already exists" pod="kube-system/kube-apiserver-pause-502641"
	Jan 27 12:28:34 pause-502641 kubelet[3045]: I0127 12:28:34.924895    3045 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-502641"
	Jan 27 12:28:34 pause-502641 kubelet[3045]: E0127 12:28:34.951074    3045 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-502641\" already exists" pod="kube-system/kube-controller-manager-pause-502641"
	Jan 27 12:28:34 pause-502641 kubelet[3045]: I0127 12:28:34.951107    3045 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-502641"
	Jan 27 12:28:34 pause-502641 kubelet[3045]: E0127 12:28:34.964429    3045 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-502641\" already exists" pod="kube-system/kube-scheduler-pause-502641"
	Jan 27 12:28:41 pause-502641 kubelet[3045]: E0127 12:28:41.845540    3045 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737980921841919599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:28:41 pause-502641 kubelet[3045]: E0127 12:28:41.845605    3045 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737980921841919599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:28:51 pause-502641 kubelet[3045]: E0127 12:28:51.847079    3045 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737980931846514058,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:28:51 pause-502641 kubelet[3045]: E0127 12:28:51.847323    3045 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737980931846514058,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-502641 -n pause-502641
helpers_test.go:261: (dbg) Run:  kubectl --context pause-502641 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-502641 -n pause-502641
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-502641 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-502641 logs -n 25: (1.219295476s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-956477 sudo                  | cilium-956477             | jenkins | v1.35.0 | 27 Jan 25 12:25 UTC |                     |
	|         | systemctl status containerd            |                           |         |         |                     |                     |
	|         | --all --full --no-pager                |                           |         |         |                     |                     |
	| ssh     | -p cilium-956477 sudo                  | cilium-956477             | jenkins | v1.35.0 | 27 Jan 25 12:25 UTC |                     |
	|         | systemctl cat containerd               |                           |         |         |                     |                     |
	|         | --no-pager                             |                           |         |         |                     |                     |
	| ssh     | -p cilium-956477 sudo cat              | cilium-956477             | jenkins | v1.35.0 | 27 Jan 25 12:25 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-956477 sudo cat              | cilium-956477             | jenkins | v1.35.0 | 27 Jan 25 12:25 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-956477 sudo                  | cilium-956477             | jenkins | v1.35.0 | 27 Jan 25 12:25 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-956477 sudo                  | cilium-956477             | jenkins | v1.35.0 | 27 Jan 25 12:25 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-956477 sudo                  | cilium-956477             | jenkins | v1.35.0 | 27 Jan 25 12:25 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-956477 sudo find             | cilium-956477             | jenkins | v1.35.0 | 27 Jan 25 12:25 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-956477 sudo crio             | cilium-956477             | jenkins | v1.35.0 | 27 Jan 25 12:25 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-956477                       | cilium-956477             | jenkins | v1.35.0 | 27 Jan 25 12:25 UTC | 27 Jan 25 12:25 UTC |
	| start   | -p cert-options-324519                 | cert-options-324519       | jenkins | v1.35.0 | 27 Jan 25 12:25 UTC | 27 Jan 25 12:27 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-980891 ssh cat      | force-systemd-flag-980891 | jenkins | v1.35.0 | 27 Jan 25 12:26 UTC | 27 Jan 25 12:26 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-980891           | force-systemd-flag-980891 | jenkins | v1.35.0 | 27 Jan 25 12:26 UTC | 27 Jan 25 12:26 UTC |
	| start   | -p pause-502641 --memory=2048          | pause-502641              | jenkins | v1.35.0 | 27 Jan 25 12:26 UTC | 27 Jan 25 12:28 UTC |
	|         | --install-addons=false                 |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2               |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | cert-options-324519 ssh                | cert-options-324519       | jenkins | v1.35.0 | 27 Jan 25 12:27 UTC | 27 Jan 25 12:27 UTC |
	|         | openssl x509 -text -noout -in          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-324519 -- sudo         | cert-options-324519       | jenkins | v1.35.0 | 27 Jan 25 12:27 UTC | 27 Jan 25 12:27 UTC |
	|         | cat /etc/kubernetes/admin.conf         |                           |         |         |                     |                     |
	| delete  | -p cert-options-324519                 | cert-options-324519       | jenkins | v1.35.0 | 27 Jan 25 12:27 UTC | 27 Jan 25 12:27 UTC |
	| start   | -p old-k8s-version-488586              | old-k8s-version-488586    | jenkins | v1.35.0 | 27 Jan 25 12:27 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true          |                           |         |         |                     |                     |
	|         | --kvm-network=default                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                |                           |         |         |                     |                     |
	|         | --keep-context=false                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-029294           | kubernetes-upgrade-029294 | jenkins | v1.35.0 | 27 Jan 25 12:27 UTC | 27 Jan 25 12:27 UTC |
	| start   | -p kubernetes-upgrade-029294           | kubernetes-upgrade-029294 | jenkins | v1.35.0 | 27 Jan 25 12:27 UTC | 27 Jan 25 12:28 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1           |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p pause-502641                        | pause-502641              | jenkins | v1.35.0 | 27 Jan 25 12:28 UTC | 27 Jan 25 12:28 UTC |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-029294           | kubernetes-upgrade-029294 | jenkins | v1.35.0 | 27 Jan 25 12:28 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-029294           | kubernetes-upgrade-029294 | jenkins | v1.35.0 | 27 Jan 25 12:28 UTC | 27 Jan 25 12:28 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1           |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-029294           | kubernetes-upgrade-029294 | jenkins | v1.35.0 | 27 Jan 25 12:28 UTC | 27 Jan 25 12:28 UTC |
	| start   | -p no-preload-472479                   | no-preload-472479         | jenkins | v1.35.0 | 27 Jan 25 12:28 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true          |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2          |                           |         |         |                     |                     |
	|         |  --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1           |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 12:28:39
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 12:28:39.996015 1772097 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:28:39.996240 1772097 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:28:39.996249 1772097 out.go:358] Setting ErrFile to fd 2...
	I0127 12:28:39.996253 1772097 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:28:39.996401 1772097 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-1724227/.minikube/bin
	I0127 12:28:39.996967 1772097 out.go:352] Setting JSON to false
	I0127 12:28:39.997934 1772097 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":33061,"bootTime":1737947859,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 12:28:39.998031 1772097 start.go:139] virtualization: kvm guest
	I0127 12:28:39.999875 1772097 out.go:177] * [no-preload-472479] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 12:28:40.001182 1772097 notify.go:220] Checking for updates...
	I0127 12:28:40.001194 1772097 out.go:177]   - MINIKUBE_LOCATION=20318
	I0127 12:28:40.002265 1772097 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:28:40.003441 1772097 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20318-1724227/kubeconfig
	I0127 12:28:40.004588 1772097 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-1724227/.minikube
	I0127 12:28:40.005723 1772097 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 12:28:40.006883 1772097 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:28:40.008443 1772097 config.go:182] Loaded profile config "cert-expiration-103712": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:28:40.008559 1772097 config.go:182] Loaded profile config "old-k8s-version-488586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 12:28:40.008677 1772097 config.go:182] Loaded profile config "pause-502641": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:28:40.008769 1772097 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:28:40.043279 1772097 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 12:28:40.044338 1772097 start.go:297] selected driver: kvm2
	I0127 12:28:40.044355 1772097 start.go:901] validating driver "kvm2" against <nil>
	I0127 12:28:40.044365 1772097 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:28:40.045031 1772097 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:28:40.045116 1772097 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20318-1724227/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 12:28:40.059515 1772097 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 12:28:40.059556 1772097 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 12:28:40.059789 1772097 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 12:28:40.059817 1772097 cni.go:84] Creating CNI manager for ""
	I0127 12:28:40.059858 1772097 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 12:28:40.059867 1772097 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 12:28:40.059912 1772097 start.go:340] cluster config:
	{Name:no-preload-472479 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-472479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:28:40.060008 1772097 iso.go:125] acquiring lock: {Name:mkadfcca31fda677c8c62cad1d325dd2bd0a2473 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:28:40.061421 1772097 out.go:177] * Starting "no-preload-472479" primary control-plane node in "no-preload-472479" cluster
	I0127 12:28:40.062498 1772097 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 12:28:40.062642 1772097 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/no-preload-472479/config.json ...
	I0127 12:28:40.062674 1772097 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/no-preload-472479/config.json: {Name:mkedfbfbfe1ebe6cb6a7a447dd39f5cbd4480c04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:28:40.062769 1772097 cache.go:107] acquiring lock: {Name:mkb25515b3b95c5192227a9f8b73580df8690d67 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:28:40.062786 1772097 cache.go:107] acquiring lock: {Name:mk639f71a3608ebd880c09c6f4eb9a539098cf11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:28:40.062791 1772097 cache.go:107] acquiring lock: {Name:mkda62c534daf9b50eef3a3b72d1af9f7ff250f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:28:40.062820 1772097 cache.go:107] acquiring lock: {Name:mked314d62a39ef0534a0d0db17e6c54c2b2c2af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:28:40.062902 1772097 cache.go:115] /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0127 12:28:40.062951 1772097 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 171.744µs
	I0127 12:28:40.062996 1772097 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0127 12:28:40.062994 1772097 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.32.1
	I0127 12:28:40.063014 1772097 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 12:28:40.063013 1772097 cache.go:107] acquiring lock: {Name:mk0ad24c2418ae07d65df52baee7ca3e4777ce5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:28:40.063013 1772097 cache.go:107] acquiring lock: {Name:mk9b4a8e0176725482a193dc85ee9e3de8f76e70 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:28:40.062897 1772097 cache.go:107] acquiring lock: {Name:mk7d3e8c31e3028ac530b433216d6548161f2b1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:28:40.062894 1772097 start.go:360] acquireMachinesLock for no-preload-472479: {Name:mk206ddd5e564ea6986d65bef76be5837a8b5360 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 12:28:40.063204 1772097 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.32.1
	I0127 12:28:40.063224 1772097 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0127 12:28:40.062969 1772097 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0127 12:28:40.063280 1772097 start.go:364] duration metric: took 68.159µs to acquireMachinesLock for "no-preload-472479"
	I0127 12:28:40.063300 1772097 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.16-0
	I0127 12:28:40.062940 1772097 cache.go:107] acquiring lock: {Name:mk7e91ce66d7bc99a7dd43c311bf67c378549dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:28:40.063318 1772097 start.go:93] Provisioning new machine with config: &{Name:no-preload-472479 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-472479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 12:28:40.063445 1772097 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 12:28:40.063451 1772097 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.32.1
	I0127 12:28:38.608071 1771581 pod_ready.go:103] pod "etcd-pause-502641" in "kube-system" namespace has status "Ready":"False"
	I0127 12:28:41.107284 1771581 pod_ready.go:103] pod "etcd-pause-502641" in "kube-system" namespace has status "Ready":"False"
	I0127 12:28:40.064193 1772097 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 12:28:40.064261 1772097 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0127 12:28:40.064263 1772097 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.32.1
	I0127 12:28:40.064263 1772097 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.16-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.16-0
	I0127 12:28:40.064340 1772097 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0127 12:28:40.064196 1772097 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.32.1
	I0127 12:28:40.064594 1772097 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.32.1
	I0127 12:28:40.065117 1772097 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0127 12:28:40.065317 1772097 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:28:40.065362 1772097 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:28:40.080504 1772097 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43795
	I0127 12:28:40.080932 1772097 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:28:40.081448 1772097 main.go:141] libmachine: Using API Version  1
	I0127 12:28:40.081469 1772097 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:28:40.081779 1772097 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:28:40.081962 1772097 main.go:141] libmachine: (no-preload-472479) Calling .GetMachineName
	I0127 12:28:40.082120 1772097 main.go:141] libmachine: (no-preload-472479) Calling .DriverName
	I0127 12:28:40.082305 1772097 start.go:159] libmachine.API.Create for "no-preload-472479" (driver="kvm2")
	I0127 12:28:40.082341 1772097 client.go:168] LocalClient.Create starting
	I0127 12:28:40.082382 1772097 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem
	I0127 12:28:40.082419 1772097 main.go:141] libmachine: Decoding PEM data...
	I0127 12:28:40.082442 1772097 main.go:141] libmachine: Parsing certificate...
	I0127 12:28:40.082509 1772097 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem
	I0127 12:28:40.082534 1772097 main.go:141] libmachine: Decoding PEM data...
	I0127 12:28:40.082554 1772097 main.go:141] libmachine: Parsing certificate...
	I0127 12:28:40.082578 1772097 main.go:141] libmachine: Running pre-create checks...
	I0127 12:28:40.082596 1772097 main.go:141] libmachine: (no-preload-472479) Calling .PreCreateCheck
	I0127 12:28:40.082948 1772097 main.go:141] libmachine: (no-preload-472479) Calling .GetConfigRaw
	I0127 12:28:40.083325 1772097 main.go:141] libmachine: Creating machine...
	I0127 12:28:40.083338 1772097 main.go:141] libmachine: (no-preload-472479) Calling .Create
	I0127 12:28:40.083474 1772097 main.go:141] libmachine: (no-preload-472479) creating KVM machine...
	I0127 12:28:40.083485 1772097 main.go:141] libmachine: (no-preload-472479) creating network...
	I0127 12:28:40.084732 1772097 main.go:141] libmachine: (no-preload-472479) DBG | found existing default KVM network
	I0127 12:28:40.086558 1772097 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:28:40.086416 1772120 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:13:89:36} reservation:<nil>}
	I0127 12:28:40.088029 1772097 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:28:40.087940 1772120 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003561d0}
	I0127 12:28:40.088051 1772097 main.go:141] libmachine: (no-preload-472479) DBG | created network xml: 
	I0127 12:28:40.088063 1772097 main.go:141] libmachine: (no-preload-472479) DBG | <network>
	I0127 12:28:40.088071 1772097 main.go:141] libmachine: (no-preload-472479) DBG |   <name>mk-no-preload-472479</name>
	I0127 12:28:40.088081 1772097 main.go:141] libmachine: (no-preload-472479) DBG |   <dns enable='no'/>
	I0127 12:28:40.088092 1772097 main.go:141] libmachine: (no-preload-472479) DBG |   
	I0127 12:28:40.088102 1772097 main.go:141] libmachine: (no-preload-472479) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0127 12:28:40.088111 1772097 main.go:141] libmachine: (no-preload-472479) DBG |     <dhcp>
	I0127 12:28:40.088126 1772097 main.go:141] libmachine: (no-preload-472479) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0127 12:28:40.088135 1772097 main.go:141] libmachine: (no-preload-472479) DBG |     </dhcp>
	I0127 12:28:40.088157 1772097 main.go:141] libmachine: (no-preload-472479) DBG |   </ip>
	I0127 12:28:40.088176 1772097 main.go:141] libmachine: (no-preload-472479) DBG |   
	I0127 12:28:40.088182 1772097 main.go:141] libmachine: (no-preload-472479) DBG | </network>
	I0127 12:28:40.088190 1772097 main.go:141] libmachine: (no-preload-472479) DBG | 
	I0127 12:28:40.092942 1772097 main.go:141] libmachine: (no-preload-472479) DBG | trying to create private KVM network mk-no-preload-472479 192.168.50.0/24...
	I0127 12:28:40.167283 1772097 main.go:141] libmachine: (no-preload-472479) DBG | private KVM network mk-no-preload-472479 192.168.50.0/24 created
	I0127 12:28:40.167331 1772097 main.go:141] libmachine: (no-preload-472479) setting up store path in /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/no-preload-472479 ...
	I0127 12:28:40.167351 1772097 main.go:141] libmachine: (no-preload-472479) building disk image from file:///home/jenkins/minikube-integration/20318-1724227/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 12:28:40.167453 1772097 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:28:40.167366 1772120 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20318-1724227/.minikube
	I0127 12:28:40.167606 1772097 main.go:141] libmachine: (no-preload-472479) Downloading /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20318-1724227/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 12:28:40.274148 1772097 cache.go:162] opening:  /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1
	I0127 12:28:40.281697 1772097 cache.go:162] opening:  /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I0127 12:28:40.294774 1772097 cache.go:162] opening:  /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1
	I0127 12:28:40.295327 1772097 cache.go:162] opening:  /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1
	I0127 12:28:40.307335 1772097 cache.go:162] opening:  /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1
	I0127 12:28:40.311571 1772097 cache.go:162] opening:  /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0127 12:28:40.329481 1772097 cache.go:162] opening:  /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0
	I0127 12:28:40.418130 1772097 cache.go:157] /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0127 12:28:40.418158 1772097 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 355.372631ms
	I0127 12:28:40.418177 1772097 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0127 12:28:40.454647 1772097 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:28:40.454533 1772120 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/no-preload-472479/id_rsa...
	I0127 12:28:40.652731 1772097 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:28:40.652597 1772120 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/no-preload-472479/no-preload-472479.rawdisk...
	I0127 12:28:40.652755 1772097 main.go:141] libmachine: (no-preload-472479) DBG | Writing magic tar header
	I0127 12:28:40.652795 1772097 main.go:141] libmachine: (no-preload-472479) DBG | Writing SSH key tar header
	I0127 12:28:40.652803 1772097 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:28:40.652709 1772120 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/no-preload-472479 ...
	I0127 12:28:40.652814 1772097 main.go:141] libmachine: (no-preload-472479) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/no-preload-472479
	I0127 12:28:40.652929 1772097 main.go:141] libmachine: (no-preload-472479) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines
	I0127 12:28:40.652960 1772097 main.go:141] libmachine: (no-preload-472479) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20318-1724227/.minikube
	I0127 12:28:40.652974 1772097 main.go:141] libmachine: (no-preload-472479) setting executable bit set on /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/no-preload-472479 (perms=drwx------)
	I0127 12:28:40.652988 1772097 main.go:141] libmachine: (no-preload-472479) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20318-1724227
	I0127 12:28:40.653004 1772097 main.go:141] libmachine: (no-preload-472479) setting executable bit set on /home/jenkins/minikube-integration/20318-1724227/.minikube/machines (perms=drwxr-xr-x)
	I0127 12:28:40.653015 1772097 main.go:141] libmachine: (no-preload-472479) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 12:28:40.653028 1772097 main.go:141] libmachine: (no-preload-472479) DBG | checking permissions on dir: /home/jenkins
	I0127 12:28:40.653040 1772097 main.go:141] libmachine: (no-preload-472479) DBG | checking permissions on dir: /home
	I0127 12:28:40.653054 1772097 main.go:141] libmachine: (no-preload-472479) setting executable bit set on /home/jenkins/minikube-integration/20318-1724227/.minikube (perms=drwxr-xr-x)
	I0127 12:28:40.653063 1772097 main.go:141] libmachine: (no-preload-472479) DBG | skipping /home - not owner
	I0127 12:28:40.653082 1772097 main.go:141] libmachine: (no-preload-472479) setting executable bit set on /home/jenkins/minikube-integration/20318-1724227 (perms=drwxrwxr-x)
	I0127 12:28:40.653095 1772097 main.go:141] libmachine: (no-preload-472479) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 12:28:40.653108 1772097 main.go:141] libmachine: (no-preload-472479) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 12:28:40.653117 1772097 main.go:141] libmachine: (no-preload-472479) creating domain...
	I0127 12:28:40.654183 1772097 main.go:141] libmachine: (no-preload-472479) define libvirt domain using xml: 
	I0127 12:28:40.654205 1772097 main.go:141] libmachine: (no-preload-472479) <domain type='kvm'>
	I0127 12:28:40.654215 1772097 main.go:141] libmachine: (no-preload-472479)   <name>no-preload-472479</name>
	I0127 12:28:40.654222 1772097 main.go:141] libmachine: (no-preload-472479)   <memory unit='MiB'>2200</memory>
	I0127 12:28:40.654231 1772097 main.go:141] libmachine: (no-preload-472479)   <vcpu>2</vcpu>
	I0127 12:28:40.654240 1772097 main.go:141] libmachine: (no-preload-472479)   <features>
	I0127 12:28:40.654252 1772097 main.go:141] libmachine: (no-preload-472479)     <acpi/>
	I0127 12:28:40.654263 1772097 main.go:141] libmachine: (no-preload-472479)     <apic/>
	I0127 12:28:40.654276 1772097 main.go:141] libmachine: (no-preload-472479)     <pae/>
	I0127 12:28:40.654305 1772097 main.go:141] libmachine: (no-preload-472479)     
	I0127 12:28:40.654325 1772097 main.go:141] libmachine: (no-preload-472479)   </features>
	I0127 12:28:40.654336 1772097 main.go:141] libmachine: (no-preload-472479)   <cpu mode='host-passthrough'>
	I0127 12:28:40.654347 1772097 main.go:141] libmachine: (no-preload-472479)   
	I0127 12:28:40.654361 1772097 main.go:141] libmachine: (no-preload-472479)   </cpu>
	I0127 12:28:40.654372 1772097 main.go:141] libmachine: (no-preload-472479)   <os>
	I0127 12:28:40.654382 1772097 main.go:141] libmachine: (no-preload-472479)     <type>hvm</type>
	I0127 12:28:40.654393 1772097 main.go:141] libmachine: (no-preload-472479)     <boot dev='cdrom'/>
	I0127 12:28:40.654404 1772097 main.go:141] libmachine: (no-preload-472479)     <boot dev='hd'/>
	I0127 12:28:40.654419 1772097 main.go:141] libmachine: (no-preload-472479)     <bootmenu enable='no'/>
	I0127 12:28:40.654430 1772097 main.go:141] libmachine: (no-preload-472479)   </os>
	I0127 12:28:40.654437 1772097 main.go:141] libmachine: (no-preload-472479)   <devices>
	I0127 12:28:40.654451 1772097 main.go:141] libmachine: (no-preload-472479)     <disk type='file' device='cdrom'>
	I0127 12:28:40.654467 1772097 main.go:141] libmachine: (no-preload-472479)       <source file='/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/no-preload-472479/boot2docker.iso'/>
	I0127 12:28:40.654480 1772097 main.go:141] libmachine: (no-preload-472479)       <target dev='hdc' bus='scsi'/>
	I0127 12:28:40.654493 1772097 main.go:141] libmachine: (no-preload-472479)       <readonly/>
	I0127 12:28:40.654505 1772097 main.go:141] libmachine: (no-preload-472479)     </disk>
	I0127 12:28:40.654516 1772097 main.go:141] libmachine: (no-preload-472479)     <disk type='file' device='disk'>
	I0127 12:28:40.654529 1772097 main.go:141] libmachine: (no-preload-472479)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 12:28:40.654543 1772097 main.go:141] libmachine: (no-preload-472479)       <source file='/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/no-preload-472479/no-preload-472479.rawdisk'/>
	I0127 12:28:40.654554 1772097 main.go:141] libmachine: (no-preload-472479)       <target dev='hda' bus='virtio'/>
	I0127 12:28:40.654567 1772097 main.go:141] libmachine: (no-preload-472479)     </disk>
	I0127 12:28:40.654581 1772097 main.go:141] libmachine: (no-preload-472479)     <interface type='network'>
	I0127 12:28:40.654594 1772097 main.go:141] libmachine: (no-preload-472479)       <source network='mk-no-preload-472479'/>
	I0127 12:28:40.654605 1772097 main.go:141] libmachine: (no-preload-472479)       <model type='virtio'/>
	I0127 12:28:40.654618 1772097 main.go:141] libmachine: (no-preload-472479)     </interface>
	I0127 12:28:40.654628 1772097 main.go:141] libmachine: (no-preload-472479)     <interface type='network'>
	I0127 12:28:40.654640 1772097 main.go:141] libmachine: (no-preload-472479)       <source network='default'/>
	I0127 12:28:40.654655 1772097 main.go:141] libmachine: (no-preload-472479)       <model type='virtio'/>
	I0127 12:28:40.654667 1772097 main.go:141] libmachine: (no-preload-472479)     </interface>
	I0127 12:28:40.654678 1772097 main.go:141] libmachine: (no-preload-472479)     <serial type='pty'>
	I0127 12:28:40.654690 1772097 main.go:141] libmachine: (no-preload-472479)       <target port='0'/>
	I0127 12:28:40.654701 1772097 main.go:141] libmachine: (no-preload-472479)     </serial>
	I0127 12:28:40.654713 1772097 main.go:141] libmachine: (no-preload-472479)     <console type='pty'>
	I0127 12:28:40.654728 1772097 main.go:141] libmachine: (no-preload-472479)       <target type='serial' port='0'/>
	I0127 12:28:40.654739 1772097 main.go:141] libmachine: (no-preload-472479)     </console>
	I0127 12:28:40.654759 1772097 main.go:141] libmachine: (no-preload-472479)     <rng model='virtio'>
	I0127 12:28:40.654772 1772097 main.go:141] libmachine: (no-preload-472479)       <backend model='random'>/dev/random</backend>
	I0127 12:28:40.654785 1772097 main.go:141] libmachine: (no-preload-472479)     </rng>
	I0127 12:28:40.654793 1772097 main.go:141] libmachine: (no-preload-472479)     
	I0127 12:28:40.654801 1772097 main.go:141] libmachine: (no-preload-472479)     
	I0127 12:28:40.654811 1772097 main.go:141] libmachine: (no-preload-472479)   </devices>
	I0127 12:28:40.654820 1772097 main.go:141] libmachine: (no-preload-472479) </domain>
	I0127 12:28:40.654829 1772097 main.go:141] libmachine: (no-preload-472479) 
	I0127 12:28:40.659135 1772097 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:25:1b:94 in network default
	I0127 12:28:40.659790 1772097 main.go:141] libmachine: (no-preload-472479) starting domain...
	I0127 12:28:40.659807 1772097 main.go:141] libmachine: (no-preload-472479) ensuring networks are active...
	I0127 12:28:40.659815 1772097 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:28:40.660712 1772097 main.go:141] libmachine: (no-preload-472479) Ensuring network default is active
	I0127 12:28:40.661077 1772097 main.go:141] libmachine: (no-preload-472479) Ensuring network mk-no-preload-472479 is active
	I0127 12:28:40.661602 1772097 main.go:141] libmachine: (no-preload-472479) getting domain XML...
	I0127 12:28:40.662545 1772097 main.go:141] libmachine: (no-preload-472479) creating domain...
	I0127 12:28:40.804131 1772097 cache.go:157] /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 exists
	I0127 12:28:40.804161 1772097 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.1" -> "/home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1" took 741.199308ms
	I0127 12:28:40.804175 1772097 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.1 -> /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 succeeded
	I0127 12:28:41.983370 1772097 main.go:141] libmachine: (no-preload-472479) waiting for IP...
	I0127 12:28:41.984407 1772097 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:28:41.985311 1772097 main.go:141] libmachine: (no-preload-472479) DBG | unable to find current IP address of domain no-preload-472479 in network mk-no-preload-472479
	I0127 12:28:41.985368 1772097 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:28:41.984974 1772120 retry.go:31] will retry after 298.567672ms: waiting for domain to come up
	I0127 12:28:41.997119 1772097 cache.go:157] /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 exists
	I0127 12:28:41.997152 1772097 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.1" -> "/home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1" took 1.934385794s
	I0127 12:28:41.997168 1772097 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.1 -> /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 succeeded
	I0127 12:28:42.052043 1772097 cache.go:157] /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 exists
	I0127 12:28:42.052075 1772097 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.1" -> "/home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1" took 1.989185759s
	I0127 12:28:42.052092 1772097 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.1 -> /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 succeeded
	I0127 12:28:42.119549 1772097 cache.go:157] /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0127 12:28:42.119572 1772097 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 2.056679001s
	I0127 12:28:42.119584 1772097 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0127 12:28:42.175623 1772097 cache.go:157] /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 exists
	I0127 12:28:42.175657 1772097 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.1" -> "/home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1" took 2.112922654s
	I0127 12:28:42.175670 1772097 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.1 -> /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 succeeded
	I0127 12:28:42.285760 1772097 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:28:42.286350 1772097 main.go:141] libmachine: (no-preload-472479) DBG | unable to find current IP address of domain no-preload-472479 in network mk-no-preload-472479
	I0127 12:28:42.286377 1772097 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:28:42.286315 1772120 retry.go:31] will retry after 291.81021ms: waiting for domain to come up
	I0127 12:28:42.396687 1772097 cache.go:157] /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 exists
	I0127 12:28:42.396719 1772097 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0" took 2.333749498s
	I0127 12:28:42.396733 1772097 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 succeeded
	I0127 12:28:42.396753 1772097 cache.go:87] Successfully saved all images to host disk.
	I0127 12:28:42.579831 1772097 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:28:42.580411 1772097 main.go:141] libmachine: (no-preload-472479) DBG | unable to find current IP address of domain no-preload-472479 in network mk-no-preload-472479
	I0127 12:28:42.580434 1772097 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:28:42.580375 1772120 retry.go:31] will retry after 353.939815ms: waiting for domain to come up
	I0127 12:28:42.935963 1772097 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:28:42.936536 1772097 main.go:141] libmachine: (no-preload-472479) DBG | unable to find current IP address of domain no-preload-472479 in network mk-no-preload-472479
	I0127 12:28:42.936566 1772097 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:28:42.936470 1772120 retry.go:31] will retry after 481.22611ms: waiting for domain to come up
	I0127 12:28:43.419185 1772097 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:28:43.419672 1772097 main.go:141] libmachine: (no-preload-472479) DBG | unable to find current IP address of domain no-preload-472479 in network mk-no-preload-472479
	I0127 12:28:43.419701 1772097 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:28:43.419641 1772120 retry.go:31] will retry after 732.731082ms: waiting for domain to come up
	I0127 12:28:44.153554 1772097 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:28:44.154045 1772097 main.go:141] libmachine: (no-preload-472479) DBG | unable to find current IP address of domain no-preload-472479 in network mk-no-preload-472479
	I0127 12:28:44.154097 1772097 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:28:44.154017 1772120 retry.go:31] will retry after 939.503013ms: waiting for domain to come up
	I0127 12:28:42.233510 1770976 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 12:28:42.233816 1770976 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 12:28:43.608133 1771581 pod_ready.go:103] pod "etcd-pause-502641" in "kube-system" namespace has status "Ready":"False"
	I0127 12:28:45.608511 1771581 pod_ready.go:103] pod "etcd-pause-502641" in "kube-system" namespace has status "Ready":"False"
	I0127 12:28:48.108087 1771581 pod_ready.go:103] pod "etcd-pause-502641" in "kube-system" namespace has status "Ready":"False"
	I0127 12:28:45.094815 1772097 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:28:45.095297 1772097 main.go:141] libmachine: (no-preload-472479) DBG | unable to find current IP address of domain no-preload-472479 in network mk-no-preload-472479
	I0127 12:28:45.095366 1772097 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:28:45.095297 1772120 retry.go:31] will retry after 1.113065701s: waiting for domain to come up
	I0127 12:28:46.210655 1772097 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:28:46.211204 1772097 main.go:141] libmachine: (no-preload-472479) DBG | unable to find current IP address of domain no-preload-472479 in network mk-no-preload-472479
	I0127 12:28:46.211237 1772097 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:28:46.211156 1772120 retry.go:31] will retry after 1.043624254s: waiting for domain to come up
	I0127 12:28:47.256190 1772097 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:28:47.256623 1772097 main.go:141] libmachine: (no-preload-472479) DBG | unable to find current IP address of domain no-preload-472479 in network mk-no-preload-472479
	I0127 12:28:47.256693 1772097 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:28:47.256615 1772120 retry.go:31] will retry after 1.732212198s: waiting for domain to come up
	I0127 12:28:48.990952 1772097 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:28:48.991464 1772097 main.go:141] libmachine: (no-preload-472479) DBG | unable to find current IP address of domain no-preload-472479 in network mk-no-preload-472479
	I0127 12:28:48.991493 1772097 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:28:48.991430 1772120 retry.go:31] will retry after 1.408754908s: waiting for domain to come up
	I0127 12:28:50.108748 1771581 pod_ready.go:103] pod "etcd-pause-502641" in "kube-system" namespace has status "Ready":"False"
	I0127 12:28:52.107149 1771581 pod_ready.go:93] pod "etcd-pause-502641" in "kube-system" namespace has status "Ready":"True"
	I0127 12:28:52.107172 1771581 pod_ready.go:82] duration metric: took 15.505827552s for pod "etcd-pause-502641" in "kube-system" namespace to be "Ready" ...
	I0127 12:28:52.107182 1771581 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-502641" in "kube-system" namespace to be "Ready" ...
	I0127 12:28:52.110955 1771581 pod_ready.go:93] pod "kube-apiserver-pause-502641" in "kube-system" namespace has status "Ready":"True"
	I0127 12:28:52.110973 1771581 pod_ready.go:82] duration metric: took 3.784544ms for pod "kube-apiserver-pause-502641" in "kube-system" namespace to be "Ready" ...
	I0127 12:28:52.110982 1771581 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-502641" in "kube-system" namespace to be "Ready" ...
	I0127 12:28:52.115587 1771581 pod_ready.go:93] pod "kube-controller-manager-pause-502641" in "kube-system" namespace has status "Ready":"True"
	I0127 12:28:52.115603 1771581 pod_ready.go:82] duration metric: took 4.614741ms for pod "kube-controller-manager-pause-502641" in "kube-system" namespace to be "Ready" ...
	I0127 12:28:52.115612 1771581 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-d2qrr" in "kube-system" namespace to be "Ready" ...
	I0127 12:28:52.121259 1771581 pod_ready.go:93] pod "kube-proxy-d2qrr" in "kube-system" namespace has status "Ready":"True"
	I0127 12:28:52.121277 1771581 pod_ready.go:82] duration metric: took 5.659419ms for pod "kube-proxy-d2qrr" in "kube-system" namespace to be "Ready" ...
	I0127 12:28:52.121286 1771581 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-502641" in "kube-system" namespace to be "Ready" ...
	I0127 12:28:52.124700 1771581 pod_ready.go:93] pod "kube-scheduler-pause-502641" in "kube-system" namespace has status "Ready":"True"
	I0127 12:28:52.124718 1771581 pod_ready.go:82] duration metric: took 3.424354ms for pod "kube-scheduler-pause-502641" in "kube-system" namespace to be "Ready" ...
	I0127 12:28:52.124727 1771581 pod_ready.go:39] duration metric: took 15.534483199s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:28:52.124748 1771581 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 12:28:52.136119 1771581 ops.go:34] apiserver oom_adj: -16
	I0127 12:28:52.136139 1771581 kubeadm.go:597] duration metric: took 33.809616415s to restartPrimaryControlPlane
	I0127 12:28:52.136148 1771581 kubeadm.go:394] duration metric: took 34.185988527s to StartCluster
	I0127 12:28:52.136169 1771581 settings.go:142] acquiring lock: {Name:mk5612abdbdf8001cdf3481ad7c8001a04d496dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:28:52.136252 1771581 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20318-1724227/kubeconfig
	I0127 12:28:52.137501 1771581 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/kubeconfig: {Name:mk9a903cebca75da3c74ff68f708e0a4e05f999b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:28:52.137754 1771581 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.83.90 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 12:28:52.137883 1771581 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 12:28:52.137973 1771581 config.go:182] Loaded profile config "pause-502641": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:28:52.139357 1771581 out.go:177] * Verifying Kubernetes components...
	I0127 12:28:52.140049 1771581 out.go:177] * Enabled addons: 
	I0127 12:28:52.140655 1771581 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:28:52.141351 1771581 addons.go:514] duration metric: took 3.484373ms for enable addons: enabled=[]
	I0127 12:28:52.292597 1771581 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:28:52.306890 1771581 node_ready.go:35] waiting up to 6m0s for node "pause-502641" to be "Ready" ...
	I0127 12:28:52.309442 1771581 node_ready.go:49] node "pause-502641" has status "Ready":"True"
	I0127 12:28:52.309468 1771581 node_ready.go:38] duration metric: took 2.532758ms for node "pause-502641" to be "Ready" ...
	I0127 12:28:52.309478 1771581 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:28:52.507703 1771581 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-qrwg2" in "kube-system" namespace to be "Ready" ...
	I0127 12:28:52.905022 1771581 pod_ready.go:93] pod "coredns-668d6bf9bc-qrwg2" in "kube-system" namespace has status "Ready":"True"
	I0127 12:28:52.905047 1771581 pod_ready.go:82] duration metric: took 397.305948ms for pod "coredns-668d6bf9bc-qrwg2" in "kube-system" namespace to be "Ready" ...
	I0127 12:28:52.905057 1771581 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-502641" in "kube-system" namespace to be "Ready" ...
	I0127 12:28:50.402066 1772097 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:28:50.402605 1772097 main.go:141] libmachine: (no-preload-472479) DBG | unable to find current IP address of domain no-preload-472479 in network mk-no-preload-472479
	I0127 12:28:50.402642 1772097 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:28:50.402555 1772120 retry.go:31] will retry after 1.870396592s: waiting for domain to come up
	I0127 12:28:52.274269 1772097 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:28:52.274653 1772097 main.go:141] libmachine: (no-preload-472479) DBG | unable to find current IP address of domain no-preload-472479 in network mk-no-preload-472479
	I0127 12:28:52.274696 1772097 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:28:52.274626 1772120 retry.go:31] will retry after 2.848763778s: waiting for domain to come up
	I0127 12:28:53.304682 1771581 pod_ready.go:93] pod "etcd-pause-502641" in "kube-system" namespace has status "Ready":"True"
	I0127 12:28:53.304706 1771581 pod_ready.go:82] duration metric: took 399.642211ms for pod "etcd-pause-502641" in "kube-system" namespace to be "Ready" ...
	I0127 12:28:53.304716 1771581 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-502641" in "kube-system" namespace to be "Ready" ...
	I0127 12:28:53.704850 1771581 pod_ready.go:93] pod "kube-apiserver-pause-502641" in "kube-system" namespace has status "Ready":"True"
	I0127 12:28:53.704877 1771581 pod_ready.go:82] duration metric: took 400.154008ms for pod "kube-apiserver-pause-502641" in "kube-system" namespace to be "Ready" ...
	I0127 12:28:53.704888 1771581 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-502641" in "kube-system" namespace to be "Ready" ...
	I0127 12:28:54.104741 1771581 pod_ready.go:93] pod "kube-controller-manager-pause-502641" in "kube-system" namespace has status "Ready":"True"
	I0127 12:28:54.104768 1771581 pod_ready.go:82] duration metric: took 399.873704ms for pod "kube-controller-manager-pause-502641" in "kube-system" namespace to be "Ready" ...
	I0127 12:28:54.104778 1771581 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-d2qrr" in "kube-system" namespace to be "Ready" ...
	I0127 12:28:54.504698 1771581 pod_ready.go:93] pod "kube-proxy-d2qrr" in "kube-system" namespace has status "Ready":"True"
	I0127 12:28:54.504723 1771581 pod_ready.go:82] duration metric: took 399.936176ms for pod "kube-proxy-d2qrr" in "kube-system" namespace to be "Ready" ...
	I0127 12:28:54.504734 1771581 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-502641" in "kube-system" namespace to be "Ready" ...
	I0127 12:28:54.904914 1771581 pod_ready.go:93] pod "kube-scheduler-pause-502641" in "kube-system" namespace has status "Ready":"True"
	I0127 12:28:54.904942 1771581 pod_ready.go:82] duration metric: took 400.201404ms for pod "kube-scheduler-pause-502641" in "kube-system" namespace to be "Ready" ...
	I0127 12:28:54.904950 1771581 pod_ready.go:39] duration metric: took 2.595461317s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:28:54.904967 1771581 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:28:54.905017 1771581 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:28:54.920191 1771581 api_server.go:72] duration metric: took 2.782401263s to wait for apiserver process to appear ...
	I0127 12:28:54.920220 1771581 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:28:54.920236 1771581 api_server.go:253] Checking apiserver healthz at https://192.168.83.90:8443/healthz ...
	I0127 12:28:54.927166 1771581 api_server.go:279] https://192.168.83.90:8443/healthz returned 200:
	ok
	I0127 12:28:54.928174 1771581 api_server.go:141] control plane version: v1.32.1
	I0127 12:28:54.928197 1771581 api_server.go:131] duration metric: took 7.971867ms to wait for apiserver health ...
	I0127 12:28:54.928207 1771581 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:28:55.107795 1771581 system_pods.go:59] 6 kube-system pods found
	I0127 12:28:55.107829 1771581 system_pods.go:61] "coredns-668d6bf9bc-qrwg2" [1b76c781-b24e-464f-a27f-95295231951e] Running
	I0127 12:28:55.107836 1771581 system_pods.go:61] "etcd-pause-502641" [90f44ece-1535-4998-9248-d8fa48eaabc4] Running
	I0127 12:28:55.107841 1771581 system_pods.go:61] "kube-apiserver-pause-502641" [fe317602-9c59-4bb0-a26a-743c2ec3bfac] Running
	I0127 12:28:55.107846 1771581 system_pods.go:61] "kube-controller-manager-pause-502641" [e634f881-2616-46fb-9853-7d45aea66aab] Running
	I0127 12:28:55.107852 1771581 system_pods.go:61] "kube-proxy-d2qrr" [2ba3c737-544f-4e4d-9de2-9f0180e87605] Running
	I0127 12:28:55.107857 1771581 system_pods.go:61] "kube-scheduler-pause-502641" [5847e715-27f8-43d2-8591-0a16effc0680] Running
	I0127 12:28:55.107865 1771581 system_pods.go:74] duration metric: took 179.650814ms to wait for pod list to return data ...
	I0127 12:28:55.107874 1771581 default_sa.go:34] waiting for default service account to be created ...
	I0127 12:28:55.305204 1771581 default_sa.go:45] found service account: "default"
	I0127 12:28:55.305239 1771581 default_sa.go:55] duration metric: took 197.356985ms for default service account to be created ...
	I0127 12:28:55.305252 1771581 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 12:28:55.506556 1771581 system_pods.go:87] 6 kube-system pods found
	I0127 12:28:55.705570 1771581 system_pods.go:105] "coredns-668d6bf9bc-qrwg2" [1b76c781-b24e-464f-a27f-95295231951e] Running
	I0127 12:28:55.705594 1771581 system_pods.go:105] "etcd-pause-502641" [90f44ece-1535-4998-9248-d8fa48eaabc4] Running
	I0127 12:28:55.705601 1771581 system_pods.go:105] "kube-apiserver-pause-502641" [fe317602-9c59-4bb0-a26a-743c2ec3bfac] Running
	I0127 12:28:55.705606 1771581 system_pods.go:105] "kube-controller-manager-pause-502641" [e634f881-2616-46fb-9853-7d45aea66aab] Running
	I0127 12:28:55.705611 1771581 system_pods.go:105] "kube-proxy-d2qrr" [2ba3c737-544f-4e4d-9de2-9f0180e87605] Running
	I0127 12:28:55.705615 1771581 system_pods.go:105] "kube-scheduler-pause-502641" [5847e715-27f8-43d2-8591-0a16effc0680] Running
	I0127 12:28:55.705623 1771581 system_pods.go:147] duration metric: took 400.364037ms to wait for k8s-apps to be running ...
	I0127 12:28:55.705632 1771581 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 12:28:55.705681 1771581 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:28:55.719702 1771581 system_svc.go:56] duration metric: took 14.059568ms WaitForService to wait for kubelet
	I0127 12:28:55.719732 1771581 kubeadm.go:582] duration metric: took 3.581948245s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 12:28:55.719751 1771581 node_conditions.go:102] verifying NodePressure condition ...
	I0127 12:28:55.905681 1771581 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 12:28:55.905710 1771581 node_conditions.go:123] node cpu capacity is 2
	I0127 12:28:55.905723 1771581 node_conditions.go:105] duration metric: took 185.967606ms to run NodePressure ...
	I0127 12:28:55.905739 1771581 start.go:241] waiting for startup goroutines ...
	I0127 12:28:55.905748 1771581 start.go:246] waiting for cluster config update ...
	I0127 12:28:55.905757 1771581 start.go:255] writing updated cluster config ...
	I0127 12:28:55.906071 1771581 ssh_runner.go:195] Run: rm -f paused
	I0127 12:28:55.956952 1771581 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 12:28:55.958486 1771581 out.go:177] * Done! kubectl is now configured to use "pause-502641" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 27 12:28:58 pause-502641 crio[2124]: time="2025-01-27 12:28:58.390518388Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737980938390488254,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=89d5c773-1539-47cf-a5e5-29647da334ce name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:28:58 pause-502641 crio[2124]: time="2025-01-27 12:28:58.391028499Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f53a5db6-8593-46df-b63b-12fa08b2da16 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:28:58 pause-502641 crio[2124]: time="2025-01-27 12:28:58.391100450Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f53a5db6-8593-46df-b63b-12fa08b2da16 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:28:58 pause-502641 crio[2124]: time="2025-01-27 12:28:58.391366954Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7a9b4eb8705368190f950de7b832bc1d06f9ed24f67522469e68d9b146e3a57,PodSandboxId:28f5f35deee13faaec1049f0b490ba41dba3b65b51d7b82a3ae66533c10e6461,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737980912198641526,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9240001f93fca0ae79dd77a451be1f17,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f994ab9d0c9cca639472262f18603ead605777d0268a0b726ce299d5098df69d,PodSandboxId:d06bdda67bf3c79720bb7a14cc5a86e7a30ab44dbc0e91111e9719b185d18972,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737980912218477599,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cccb33adea1f1d17b38799578e90268c,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f03254be7fc072cf783d61b7ab141c9fc3389038e5f2997c17e5cde102f373,PodSandboxId:96b1dcfa14c236c58d343113e10150ef63c604a279ca5a41fde4d3f20547eab2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737980912203153939,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b799dfa727fce35e9ff2a435d6c72826,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42d60156090c2ea017d6b9eb8012b56ee31c255ec4b91a98a5f801c570b3ac6c,PodSandboxId:7831575bc0ac469aea62dcd66711339a3f2265bb9789ca7111d434f3319c066a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737980912191214484,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e112df066f56963b26718b9af39bd9af,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc9ea111c52fa5209b7c8fd5ca9c006dca81b359ef213e3e8f985e6882812658,PodSandboxId:af08d08a4f4b28f34c014968566d9f3198b3567d09b73a491002abfd54900731,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737980898898535751,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-qrwg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b76c781-b24e-464f-a27f-95295231951e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"}
,{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62fb6b61bb5ab9fa5cc65bdce198999fb3c90cbcfcec4cc83f2a5df0d2f56342,PodSandboxId:f6d9111dcb33b135e4a44c2409ab923e30dba405d90c29bf17efbe8ef53c0c74,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737980898120376448,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d2qrr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ba3c737-544f-4e4d-9de2-9f0180e87605,},Annotations:map[string]string{io.
kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdc12a65d7653e487c3fd6d5e92a51425a53ef48b02dfc29a646f99aaeccdd93,PodSandboxId:7831575bc0ac469aea62dcd66711339a3f2265bb9789ca7111d434f3319c066a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737980898185985203,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e112df066f56963b26718b9af39bd9af,},Annotations:map[string]string{io.kubernetes.containe
r.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80632174b7df01b7a7d3a3e81a6e328a4a86b810b911af70188e80de74ab7383,PodSandboxId:d06bdda67bf3c79720bb7a14cc5a86e7a30ab44dbc0e91111e9719b185d18972,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737980898209771080,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cccb33adea1f1d17b38799578e90268c,},Annotations:map[string]string{io.kubernetes.
container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49dcc6de88610433f7f49beaa04d548d1649174d5d9f5c8aa0674232b462846,PodSandboxId:28f5f35deee13faaec1049f0b490ba41dba3b65b51d7b82a3ae66533c10e6461,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737980898211768306,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9240001f93fca0ae79dd77a451be1f17,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f2e731b7a1998a444a35c4c83ec0524edc5ee691ca23c3ff0c43adcac4b54f0,PodSandboxId:96b1dcfa14c236c58d343113e10150ef63c604a279ca5a41fde4d3f20547eab2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1737980898060262083,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b799dfa727fce35e9ff2a435d6c72826,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2852b5fafd3d9d698190468349c189ff20dea9b0adfdcaca272c7e307004195,PodSandboxId:83733cedb10202af53f2a001510340e720cc2c9a1802aa10fb250d979f637a26,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1737980852098744728,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-qrwg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b76c781-b24e-464f-a27f-95295231951e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9b1df0076dd8fe0f792802f4334195624c380a5c42f315ba48eac9805e7d57f,PodSandboxId:15f8030c6915f54ec759231d0c9addae4daa99839c275a6fbf51c994f7e333a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1737980851679856790,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d2qrr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 2ba3c737-544f-4e4d-9de2-9f0180e87605,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f53a5db6-8593-46df-b63b-12fa08b2da16 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:28:58 pause-502641 crio[2124]: time="2025-01-27 12:28:58.433663968Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1b7d25bc-4960-4a5f-b012-9be4765a29c4 name=/runtime.v1.RuntimeService/Version
	Jan 27 12:28:58 pause-502641 crio[2124]: time="2025-01-27 12:28:58.433781058Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1b7d25bc-4960-4a5f-b012-9be4765a29c4 name=/runtime.v1.RuntimeService/Version
	Jan 27 12:28:58 pause-502641 crio[2124]: time="2025-01-27 12:28:58.435179563Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a5369aab-bde9-40cd-803d-0d2574432fb7 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:28:58 pause-502641 crio[2124]: time="2025-01-27 12:28:58.435505485Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737980938435488525,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a5369aab-bde9-40cd-803d-0d2574432fb7 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:28:58 pause-502641 crio[2124]: time="2025-01-27 12:28:58.436026212Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=28f91081-5ab1-444d-8b32-0993d9ad7dd4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:28:58 pause-502641 crio[2124]: time="2025-01-27 12:28:58.436090811Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=28f91081-5ab1-444d-8b32-0993d9ad7dd4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:28:58 pause-502641 crio[2124]: time="2025-01-27 12:28:58.436336469Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7a9b4eb8705368190f950de7b832bc1d06f9ed24f67522469e68d9b146e3a57,PodSandboxId:28f5f35deee13faaec1049f0b490ba41dba3b65b51d7b82a3ae66533c10e6461,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737980912198641526,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9240001f93fca0ae79dd77a451be1f17,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f994ab9d0c9cca639472262f18603ead605777d0268a0b726ce299d5098df69d,PodSandboxId:d06bdda67bf3c79720bb7a14cc5a86e7a30ab44dbc0e91111e9719b185d18972,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737980912218477599,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cccb33adea1f1d17b38799578e90268c,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f03254be7fc072cf783d61b7ab141c9fc3389038e5f2997c17e5cde102f373,PodSandboxId:96b1dcfa14c236c58d343113e10150ef63c604a279ca5a41fde4d3f20547eab2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737980912203153939,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b799dfa727fce35e9ff2a435d6c72826,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42d60156090c2ea017d6b9eb8012b56ee31c255ec4b91a98a5f801c570b3ac6c,PodSandboxId:7831575bc0ac469aea62dcd66711339a3f2265bb9789ca7111d434f3319c066a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737980912191214484,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e112df066f56963b26718b9af39bd9af,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc9ea111c52fa5209b7c8fd5ca9c006dca81b359ef213e3e8f985e6882812658,PodSandboxId:af08d08a4f4b28f34c014968566d9f3198b3567d09b73a491002abfd54900731,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737980898898535751,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-qrwg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b76c781-b24e-464f-a27f-95295231951e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"}
,{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62fb6b61bb5ab9fa5cc65bdce198999fb3c90cbcfcec4cc83f2a5df0d2f56342,PodSandboxId:f6d9111dcb33b135e4a44c2409ab923e30dba405d90c29bf17efbe8ef53c0c74,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737980898120376448,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d2qrr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ba3c737-544f-4e4d-9de2-9f0180e87605,},Annotations:map[string]string{io.
kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdc12a65d7653e487c3fd6d5e92a51425a53ef48b02dfc29a646f99aaeccdd93,PodSandboxId:7831575bc0ac469aea62dcd66711339a3f2265bb9789ca7111d434f3319c066a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737980898185985203,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e112df066f56963b26718b9af39bd9af,},Annotations:map[string]string{io.kubernetes.containe
r.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80632174b7df01b7a7d3a3e81a6e328a4a86b810b911af70188e80de74ab7383,PodSandboxId:d06bdda67bf3c79720bb7a14cc5a86e7a30ab44dbc0e91111e9719b185d18972,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737980898209771080,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cccb33adea1f1d17b38799578e90268c,},Annotations:map[string]string{io.kubernetes.
container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49dcc6de88610433f7f49beaa04d548d1649174d5d9f5c8aa0674232b462846,PodSandboxId:28f5f35deee13faaec1049f0b490ba41dba3b65b51d7b82a3ae66533c10e6461,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737980898211768306,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9240001f93fca0ae79dd77a451be1f17,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f2e731b7a1998a444a35c4c83ec0524edc5ee691ca23c3ff0c43adcac4b54f0,PodSandboxId:96b1dcfa14c236c58d343113e10150ef63c604a279ca5a41fde4d3f20547eab2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1737980898060262083,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b799dfa727fce35e9ff2a435d6c72826,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2852b5fafd3d9d698190468349c189ff20dea9b0adfdcaca272c7e307004195,PodSandboxId:83733cedb10202af53f2a001510340e720cc2c9a1802aa10fb250d979f637a26,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1737980852098744728,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-qrwg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b76c781-b24e-464f-a27f-95295231951e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9b1df0076dd8fe0f792802f4334195624c380a5c42f315ba48eac9805e7d57f,PodSandboxId:15f8030c6915f54ec759231d0c9addae4daa99839c275a6fbf51c994f7e333a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1737980851679856790,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d2qrr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 2ba3c737-544f-4e4d-9de2-9f0180e87605,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=28f91081-5ab1-444d-8b32-0993d9ad7dd4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:28:58 pause-502641 crio[2124]: time="2025-01-27 12:28:58.470619599Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cd1ed885-59de-447f-8f5b-9ad28452030f name=/runtime.v1.RuntimeService/Version
	Jan 27 12:28:58 pause-502641 crio[2124]: time="2025-01-27 12:28:58.470695643Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cd1ed885-59de-447f-8f5b-9ad28452030f name=/runtime.v1.RuntimeService/Version
	Jan 27 12:28:58 pause-502641 crio[2124]: time="2025-01-27 12:28:58.472470142Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=485c6be8-c0cc-486d-ab46-b6617d067dfa name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:28:58 pause-502641 crio[2124]: time="2025-01-27 12:28:58.472829491Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737980938472809770,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=485c6be8-c0cc-486d-ab46-b6617d067dfa name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:28:58 pause-502641 crio[2124]: time="2025-01-27 12:28:58.473399920Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4325a71f-66ce-4808-bcad-76bd66146374 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:28:58 pause-502641 crio[2124]: time="2025-01-27 12:28:58.473451812Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4325a71f-66ce-4808-bcad-76bd66146374 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:28:58 pause-502641 crio[2124]: time="2025-01-27 12:28:58.473698642Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7a9b4eb8705368190f950de7b832bc1d06f9ed24f67522469e68d9b146e3a57,PodSandboxId:28f5f35deee13faaec1049f0b490ba41dba3b65b51d7b82a3ae66533c10e6461,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737980912198641526,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9240001f93fca0ae79dd77a451be1f17,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f994ab9d0c9cca639472262f18603ead605777d0268a0b726ce299d5098df69d,PodSandboxId:d06bdda67bf3c79720bb7a14cc5a86e7a30ab44dbc0e91111e9719b185d18972,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737980912218477599,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cccb33adea1f1d17b38799578e90268c,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f03254be7fc072cf783d61b7ab141c9fc3389038e5f2997c17e5cde102f373,PodSandboxId:96b1dcfa14c236c58d343113e10150ef63c604a279ca5a41fde4d3f20547eab2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737980912203153939,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b799dfa727fce35e9ff2a435d6c72826,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42d60156090c2ea017d6b9eb8012b56ee31c255ec4b91a98a5f801c570b3ac6c,PodSandboxId:7831575bc0ac469aea62dcd66711339a3f2265bb9789ca7111d434f3319c066a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737980912191214484,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e112df066f56963b26718b9af39bd9af,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc9ea111c52fa5209b7c8fd5ca9c006dca81b359ef213e3e8f985e6882812658,PodSandboxId:af08d08a4f4b28f34c014968566d9f3198b3567d09b73a491002abfd54900731,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737980898898535751,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-qrwg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b76c781-b24e-464f-a27f-95295231951e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"}
,{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62fb6b61bb5ab9fa5cc65bdce198999fb3c90cbcfcec4cc83f2a5df0d2f56342,PodSandboxId:f6d9111dcb33b135e4a44c2409ab923e30dba405d90c29bf17efbe8ef53c0c74,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737980898120376448,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d2qrr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ba3c737-544f-4e4d-9de2-9f0180e87605,},Annotations:map[string]string{io.
kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdc12a65d7653e487c3fd6d5e92a51425a53ef48b02dfc29a646f99aaeccdd93,PodSandboxId:7831575bc0ac469aea62dcd66711339a3f2265bb9789ca7111d434f3319c066a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737980898185985203,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e112df066f56963b26718b9af39bd9af,},Annotations:map[string]string{io.kubernetes.containe
r.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80632174b7df01b7a7d3a3e81a6e328a4a86b810b911af70188e80de74ab7383,PodSandboxId:d06bdda67bf3c79720bb7a14cc5a86e7a30ab44dbc0e91111e9719b185d18972,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737980898209771080,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cccb33adea1f1d17b38799578e90268c,},Annotations:map[string]string{io.kubernetes.
container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49dcc6de88610433f7f49beaa04d548d1649174d5d9f5c8aa0674232b462846,PodSandboxId:28f5f35deee13faaec1049f0b490ba41dba3b65b51d7b82a3ae66533c10e6461,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737980898211768306,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9240001f93fca0ae79dd77a451be1f17,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f2e731b7a1998a444a35c4c83ec0524edc5ee691ca23c3ff0c43adcac4b54f0,PodSandboxId:96b1dcfa14c236c58d343113e10150ef63c604a279ca5a41fde4d3f20547eab2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1737980898060262083,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b799dfa727fce35e9ff2a435d6c72826,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2852b5fafd3d9d698190468349c189ff20dea9b0adfdcaca272c7e307004195,PodSandboxId:83733cedb10202af53f2a001510340e720cc2c9a1802aa10fb250d979f637a26,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1737980852098744728,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-qrwg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b76c781-b24e-464f-a27f-95295231951e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9b1df0076dd8fe0f792802f4334195624c380a5c42f315ba48eac9805e7d57f,PodSandboxId:15f8030c6915f54ec759231d0c9addae4daa99839c275a6fbf51c994f7e333a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1737980851679856790,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d2qrr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 2ba3c737-544f-4e4d-9de2-9f0180e87605,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4325a71f-66ce-4808-bcad-76bd66146374 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:28:58 pause-502641 crio[2124]: time="2025-01-27 12:28:58.512918824Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5f2448d9-a77f-4c8d-8ecb-0d023b2b44c3 name=/runtime.v1.RuntimeService/Version
	Jan 27 12:28:58 pause-502641 crio[2124]: time="2025-01-27 12:28:58.512994285Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5f2448d9-a77f-4c8d-8ecb-0d023b2b44c3 name=/runtime.v1.RuntimeService/Version
	Jan 27 12:28:58 pause-502641 crio[2124]: time="2025-01-27 12:28:58.513848447Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=34307126-2325-4ebf-8ea9-65d4be7281c0 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:28:58 pause-502641 crio[2124]: time="2025-01-27 12:28:58.514396433Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737980938514371779,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=34307126-2325-4ebf-8ea9-65d4be7281c0 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:28:58 pause-502641 crio[2124]: time="2025-01-27 12:28:58.514858192Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=89d8dd3f-531d-4977-a503-864c68a1971a name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:28:58 pause-502641 crio[2124]: time="2025-01-27 12:28:58.514963103Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=89d8dd3f-531d-4977-a503-864c68a1971a name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:28:58 pause-502641 crio[2124]: time="2025-01-27 12:28:58.515203253Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7a9b4eb8705368190f950de7b832bc1d06f9ed24f67522469e68d9b146e3a57,PodSandboxId:28f5f35deee13faaec1049f0b490ba41dba3b65b51d7b82a3ae66533c10e6461,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737980912198641526,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9240001f93fca0ae79dd77a451be1f17,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f994ab9d0c9cca639472262f18603ead605777d0268a0b726ce299d5098df69d,PodSandboxId:d06bdda67bf3c79720bb7a14cc5a86e7a30ab44dbc0e91111e9719b185d18972,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737980912218477599,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cccb33adea1f1d17b38799578e90268c,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f03254be7fc072cf783d61b7ab141c9fc3389038e5f2997c17e5cde102f373,PodSandboxId:96b1dcfa14c236c58d343113e10150ef63c604a279ca5a41fde4d3f20547eab2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737980912203153939,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b799dfa727fce35e9ff2a435d6c72826,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42d60156090c2ea017d6b9eb8012b56ee31c255ec4b91a98a5f801c570b3ac6c,PodSandboxId:7831575bc0ac469aea62dcd66711339a3f2265bb9789ca7111d434f3319c066a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737980912191214484,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e112df066f56963b26718b9af39bd9af,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc9ea111c52fa5209b7c8fd5ca9c006dca81b359ef213e3e8f985e6882812658,PodSandboxId:af08d08a4f4b28f34c014968566d9f3198b3567d09b73a491002abfd54900731,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737980898898535751,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-qrwg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b76c781-b24e-464f-a27f-95295231951e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"}
,{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62fb6b61bb5ab9fa5cc65bdce198999fb3c90cbcfcec4cc83f2a5df0d2f56342,PodSandboxId:f6d9111dcb33b135e4a44c2409ab923e30dba405d90c29bf17efbe8ef53c0c74,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737980898120376448,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d2qrr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ba3c737-544f-4e4d-9de2-9f0180e87605,},Annotations:map[string]string{io.
kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdc12a65d7653e487c3fd6d5e92a51425a53ef48b02dfc29a646f99aaeccdd93,PodSandboxId:7831575bc0ac469aea62dcd66711339a3f2265bb9789ca7111d434f3319c066a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737980898185985203,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e112df066f56963b26718b9af39bd9af,},Annotations:map[string]string{io.kubernetes.containe
r.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80632174b7df01b7a7d3a3e81a6e328a4a86b810b911af70188e80de74ab7383,PodSandboxId:d06bdda67bf3c79720bb7a14cc5a86e7a30ab44dbc0e91111e9719b185d18972,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737980898209771080,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cccb33adea1f1d17b38799578e90268c,},Annotations:map[string]string{io.kubernetes.
container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49dcc6de88610433f7f49beaa04d548d1649174d5d9f5c8aa0674232b462846,PodSandboxId:28f5f35deee13faaec1049f0b490ba41dba3b65b51d7b82a3ae66533c10e6461,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737980898211768306,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9240001f93fca0ae79dd77a451be1f17,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f2e731b7a1998a444a35c4c83ec0524edc5ee691ca23c3ff0c43adcac4b54f0,PodSandboxId:96b1dcfa14c236c58d343113e10150ef63c604a279ca5a41fde4d3f20547eab2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1737980898060262083,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-502641,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b799dfa727fce35e9ff2a435d6c72826,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2852b5fafd3d9d698190468349c189ff20dea9b0adfdcaca272c7e307004195,PodSandboxId:83733cedb10202af53f2a001510340e720cc2c9a1802aa10fb250d979f637a26,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1737980852098744728,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-qrwg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b76c781-b24e-464f-a27f-95295231951e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9b1df0076dd8fe0f792802f4334195624c380a5c42f315ba48eac9805e7d57f,PodSandboxId:15f8030c6915f54ec759231d0c9addae4daa99839c275a6fbf51c994f7e333a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1737980851679856790,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d2qrr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 2ba3c737-544f-4e4d-9de2-9f0180e87605,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=89d8dd3f-531d-4977-a503-864c68a1971a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	f994ab9d0c9cc       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35   26 seconds ago       Running             kube-controller-manager   2                   d06bdda67bf3c       kube-controller-manager-pause-502641
	e6f03254be7fc       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1   26 seconds ago       Running             kube-scheduler            2                   96b1dcfa14c23       kube-scheduler-pause-502641
	a7a9b4eb87053       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   26 seconds ago       Running             etcd                      2                   28f5f35deee13       etcd-pause-502641
	42d60156090c2       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a   26 seconds ago       Running             kube-apiserver            2                   7831575bc0ac4       kube-apiserver-pause-502641
	bc9ea111c52fa       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   39 seconds ago       Running             coredns                   1                   af08d08a4f4b2       coredns-668d6bf9bc-qrwg2
	f49dcc6de8861       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   40 seconds ago       Exited              etcd                      1                   28f5f35deee13       etcd-pause-502641
	80632174b7df0       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35   40 seconds ago       Exited              kube-controller-manager   1                   d06bdda67bf3c       kube-controller-manager-pause-502641
	cdc12a65d7653       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a   40 seconds ago       Exited              kube-apiserver            1                   7831575bc0ac4       kube-apiserver-pause-502641
	62fb6b61bb5ab       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a   40 seconds ago       Running             kube-proxy                1                   f6d9111dcb33b       kube-proxy-d2qrr
	5f2e731b7a199       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1   40 seconds ago       Exited              kube-scheduler            1                   96b1dcfa14c23       kube-scheduler-pause-502641
	e2852b5fafd3d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   0                   83733cedb1020       coredns-668d6bf9bc-qrwg2
	e9b1df0076dd8       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a   About a minute ago   Exited              kube-proxy                0                   15f8030c6915f       kube-proxy-d2qrr
	
	
	==> coredns [bc9ea111c52fa5209b7c8fd5ca9c006dca81b359ef213e3e8f985e6882812658] <==
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:43524 - 13320 "HINFO IN 125212257908539037.1140348419135132907. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.006846401s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.3:39328->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[2101664387]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 12:28:19.215) (total time: 10786ms):
	Trace[2101664387]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.3:39328->10.96.0.1:443: read: connection reset by peer 10786ms (12:28:30.001)
	Trace[2101664387]: [10.78613414s] [10.78613414s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.3:39328->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.3:39320->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1106148870]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 12:28:19.215) (total time: 10787ms):
	Trace[1106148870]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.3:39320->10.96.0.1:443: read: connection reset by peer 10787ms (12:28:30.002)
	Trace[1106148870]: [10.787302327s] [10.787302327s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.3:39320->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.3:39312->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1620791708]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 12:28:19.214) (total time: 10787ms):
	Trace[1620791708]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.3:39312->10.96.0.1:443: read: connection reset by peer 10787ms (12:28:30.002)
	Trace[1620791708]: [10.78798866s] [10.78798866s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.3:39312->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> coredns [e2852b5fafd3d9d698190468349c189ff20dea9b0adfdcaca272c7e307004195] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:59119 - 54153 "HINFO IN 2493060570586819184.7280973419994849227. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011331475s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1471642220]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 12:27:32.299) (total time: 30001ms):
	Trace[1471642220]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (12:28:02.301)
	Trace[1471642220]: [30.001857467s] [30.001857467s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1296202342]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 12:27:32.301) (total time: 30000ms):
	Trace[1296202342]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (12:28:02.302)
	Trace[1296202342]: [30.000662008s] [30.000662008s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[806502041]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 12:27:32.299) (total time: 30002ms):
	Trace[806502041]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (12:28:02.302)
	Trace[806502041]: [30.00251854s] [30.00251854s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-502641
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-502641
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650
	                    minikube.k8s.io/name=pause-502641
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T12_27_27_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 12:27:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-502641
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 12:28:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 12:28:34 +0000   Mon, 27 Jan 2025 12:27:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 12:28:34 +0000   Mon, 27 Jan 2025 12:27:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 12:28:34 +0000   Mon, 27 Jan 2025 12:27:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 12:28:34 +0000   Mon, 27 Jan 2025 12:27:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.90
	  Hostname:    pause-502641
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 a7b9d39c50b34d4fa1590cfc705a9046
	  System UUID:                a7b9d39c-50b3-4d4f-a159-0cfc705a9046
	  Boot ID:                    99446827-dbec-4287-9ca9-be203fb13ea4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-qrwg2                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     87s
	  kube-system                 etcd-pause-502641                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         92s
	  kube-system                 kube-apiserver-pause-502641             250m (12%)    0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-controller-manager-pause-502641    200m (10%)    0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-proxy-d2qrr                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-scheduler-pause-502641             100m (5%)     0 (0%)      0 (0%)           0 (0%)         92s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 86s                kube-proxy       
	  Normal  Starting                 23s                kube-proxy       
	  Normal  NodeHasSufficientMemory  98s (x8 over 98s)  kubelet          Node pause-502641 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    98s (x8 over 98s)  kubelet          Node pause-502641 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     98s (x7 over 98s)  kubelet          Node pause-502641 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    92s                kubelet          Node pause-502641 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  92s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  92s                kubelet          Node pause-502641 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     92s                kubelet          Node pause-502641 status is now: NodeHasSufficientPID
	  Normal  Starting                 92s                kubelet          Starting kubelet.
	  Normal  NodeReady                91s                kubelet          Node pause-502641 status is now: NodeReady
	  Normal  RegisteredNode           88s                node-controller  Node pause-502641 event: Registered Node pause-502641 in Controller
	  Normal  Starting                 27s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27s (x8 over 27s)  kubelet          Node pause-502641 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s (x8 over 27s)  kubelet          Node pause-502641 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s (x7 over 27s)  kubelet          Node pause-502641 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21s                node-controller  Node pause-502641 event: Registered Node pause-502641 in Controller
	
	
	==> dmesg <==
	[  +8.677579] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.053419] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065638] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.160761] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.153929] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.279561] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +4.002254] systemd-fstab-generator[745]: Ignoring "noauto" option for root device
	[  +4.490980] systemd-fstab-generator[883]: Ignoring "noauto" option for root device
	[  +0.057380] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.994965] systemd-fstab-generator[1231]: Ignoring "noauto" option for root device
	[  +0.086003] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.819652] systemd-fstab-generator[1362]: Ignoring "noauto" option for root device
	[  +0.235740] kauditd_printk_skb: 46 callbacks suppressed
	[Jan27 12:28] kauditd_printk_skb: 50 callbacks suppressed
	[  +8.860927] systemd-fstab-generator[2048]: Ignoring "noauto" option for root device
	[  +0.138973] systemd-fstab-generator[2061]: Ignoring "noauto" option for root device
	[  +0.165010] systemd-fstab-generator[2075]: Ignoring "noauto" option for root device
	[  +0.137114] systemd-fstab-generator[2087]: Ignoring "noauto" option for root device
	[  +0.271873] systemd-fstab-generator[2115]: Ignoring "noauto" option for root device
	[  +1.277517] systemd-fstab-generator[2238]: Ignoring "noauto" option for root device
	[  +9.371424] kauditd_printk_skb: 197 callbacks suppressed
	[  +4.791745] systemd-fstab-generator[3038]: Ignoring "noauto" option for root device
	[  +0.390967] kauditd_printk_skb: 15 callbacks suppressed
	[  +6.320070] kauditd_printk_skb: 15 callbacks suppressed
	[ +13.990031] systemd-fstab-generator[3380]: Ignoring "noauto" option for root device
	
	
	==> etcd [a7a9b4eb8705368190f950de7b832bc1d06f9ed24f67522469e68d9b146e3a57] <==
	{"level":"info","ts":"2025-01-27T12:28:32.562165Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea071dc50db5730b switched to configuration voters=(16863480061887869707)"}
	{"level":"info","ts":"2025-01-27T12:28:32.562240Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.83.90:2380"}
	{"level":"info","ts":"2025-01-27T12:28:32.565779Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-01-27T12:28:32.566738Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-01-27T12:28:32.569004Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"37b821ace5f4705e","local-member-id":"ea071dc50db5730b","added-peer-id":"ea071dc50db5730b","added-peer-peer-urls":["https://192.168.83.90:2380"]}
	{"level":"info","ts":"2025-01-27T12:28:32.569199Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"37b821ace5f4705e","local-member-id":"ea071dc50db5730b","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T12:28:32.569324Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T12:28:32.569261Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.83.90:2380"}
	{"level":"info","ts":"2025-01-27T12:28:32.569274Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-01-27T12:28:33.519022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea071dc50db5730b is starting a new election at term 2"}
	{"level":"info","ts":"2025-01-27T12:28:33.519166Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea071dc50db5730b became pre-candidate at term 2"}
	{"level":"info","ts":"2025-01-27T12:28:33.519239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea071dc50db5730b received MsgPreVoteResp from ea071dc50db5730b at term 2"}
	{"level":"info","ts":"2025-01-27T12:28:33.519332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea071dc50db5730b became candidate at term 3"}
	{"level":"info","ts":"2025-01-27T12:28:33.519420Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea071dc50db5730b received MsgVoteResp from ea071dc50db5730b at term 3"}
	{"level":"info","ts":"2025-01-27T12:28:33.519462Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea071dc50db5730b became leader at term 3"}
	{"level":"info","ts":"2025-01-27T12:28:33.519496Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea071dc50db5730b elected leader ea071dc50db5730b at term 3"}
	{"level":"info","ts":"2025-01-27T12:28:33.524610Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"ea071dc50db5730b","local-member-attributes":"{Name:pause-502641 ClientURLs:[https://192.168.83.90:2379]}","request-path":"/0/members/ea071dc50db5730b/attributes","cluster-id":"37b821ace5f4705e","publish-timeout":"7s"}
	{"level":"info","ts":"2025-01-27T12:28:33.524694Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T12:28:33.525322Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T12:28:33.525989Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T12:28:33.526144Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T12:28:33.526581Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.83.90:2379"}
	{"level":"info","ts":"2025-01-27T12:28:33.526960Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-01-27T12:28:33.527188Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-01-27T12:28:33.527219Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [f49dcc6de88610433f7f49beaa04d548d1649174d5d9f5c8aa0674232b462846] <==
	
	
	==> kernel <==
	 12:28:58 up 2 min,  0 users,  load average: 0.31, 0.15, 0.06
	Linux pause-502641 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [42d60156090c2ea017d6b9eb8012b56ee31c255ec4b91a98a5f801c570b3ac6c] <==
	I0127 12:28:34.720704       1 policy_source.go:240] refreshing policies
	I0127 12:28:34.722754       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0127 12:28:34.724064       1 shared_informer.go:320] Caches are synced for configmaps
	I0127 12:28:34.724113       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0127 12:28:34.733719       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0127 12:28:34.734102       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0127 12:28:34.743450       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0127 12:28:34.744987       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0127 12:28:34.745660       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0127 12:28:34.745800       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0127 12:28:34.746157       1 aggregator.go:171] initial CRD sync complete...
	I0127 12:28:34.746223       1 autoregister_controller.go:144] Starting autoregister controller
	I0127 12:28:34.746246       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0127 12:28:34.746267       1 cache.go:39] Caches are synced for autoregister controller
	I0127 12:28:34.759732       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0127 12:28:34.797249       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0127 12:28:34.816000       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0127 12:28:34.910923       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0127 12:28:35.625166       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0127 12:28:36.431484       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0127 12:28:36.481838       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0127 12:28:36.514409       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0127 12:28:36.522855       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0127 12:28:38.137569       1 controller.go:615] quota admission added evaluator for: endpoints
	I0127 12:28:38.188301       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [cdc12a65d7653e487c3fd6d5e92a51425a53ef48b02dfc29a646f99aaeccdd93] <==
	W0127 12:28:18.758254       1 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags
	I0127 12:28:18.758630       1 options.go:238] external host was not specified, using 192.168.83.90
	I0127 12:28:18.763563       1 server.go:143] Version: v1.32.1
	I0127 12:28:18.763603       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:28:19.383022       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	W0127 12:28:19.385075       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:28:19.385222       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0127 12:28:19.393836       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0127 12:28:19.400793       1 plugins.go:157] Loaded 13 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0127 12:28:19.400824       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0127 12:28:19.401094       1 instance.go:233] Using reconciler: lease
	W0127 12:28:19.402979       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:28:20.386546       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:28:20.386647       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:28:20.403552       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:28:21.760233       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:28:21.872831       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:28:22.255001       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:28:24.126014       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:28:24.404438       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:28:25.233831       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:28:27.756957       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:28:28.715483       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [80632174b7df01b7a7d3a3e81a6e328a4a86b810b911af70188e80de74ab7383] <==
	
	
	==> kube-controller-manager [f994ab9d0c9cca639472262f18603ead605777d0268a0b726ce299d5098df69d] <==
	I0127 12:28:37.922949       1 shared_informer.go:320] Caches are synced for node
	I0127 12:28:37.923034       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0127 12:28:37.923114       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0127 12:28:37.923146       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0127 12:28:37.923171       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0127 12:28:37.923321       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-502641"
	I0127 12:28:37.926219       1 shared_informer.go:320] Caches are synced for expand
	I0127 12:28:37.931389       1 shared_informer.go:320] Caches are synced for PV protection
	I0127 12:28:37.931457       1 shared_informer.go:320] Caches are synced for namespace
	I0127 12:28:37.933760       1 shared_informer.go:320] Caches are synced for stateful set
	I0127 12:28:37.933803       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0127 12:28:37.933910       1 shared_informer.go:320] Caches are synced for service account
	I0127 12:28:37.935130       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0127 12:28:37.935183       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0127 12:28:37.935855       1 shared_informer.go:320] Caches are synced for PVC protection
	I0127 12:28:37.935907       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0127 12:28:37.942066       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0127 12:28:37.943251       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 12:28:37.944418       1 shared_informer.go:320] Caches are synced for deployment
	I0127 12:28:37.945080       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 12:28:37.965818       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0127 12:28:37.973288       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 12:28:37.978699       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 12:28:37.978737       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0127 12:28:37.978748       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [62fb6b61bb5ab9fa5cc65bdce198999fb3c90cbcfcec4cc83f2a5df0d2f56342] <==
	 >
	E0127 12:28:19.049758       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 12:28:30.000202       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-502641\": dial tcp 192.168.83.90:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.83.90:57090->192.168.83.90:8443: read: connection reset by peer"
	E0127 12:28:31.170163       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-502641\": dial tcp 192.168.83.90:8443: connect: connection refused"
	I0127 12:28:34.778280       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.83.90"]
	E0127 12:28:34.778365       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 12:28:34.862341       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 12:28:34.862389       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 12:28:34.862413       1 server_linux.go:170] "Using iptables Proxier"
	I0127 12:28:34.867030       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 12:28:34.867360       1 server.go:497] "Version info" version="v1.32.1"
	I0127 12:28:34.867388       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:28:34.869536       1 config.go:199] "Starting service config controller"
	I0127 12:28:34.869591       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 12:28:34.869612       1 config.go:105] "Starting endpoint slice config controller"
	I0127 12:28:34.869630       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 12:28:34.872386       1 config.go:329] "Starting node config controller"
	I0127 12:28:34.872416       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 12:28:34.970767       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 12:28:34.970817       1 shared_informer.go:320] Caches are synced for service config
	I0127 12:28:34.973294       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e9b1df0076dd8fe0f792802f4334195624c380a5c42f315ba48eac9805e7d57f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 12:27:32.072301       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 12:27:32.100806       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.83.90"]
	E0127 12:27:32.101046       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 12:27:32.175360       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 12:27:32.175552       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 12:27:32.175754       1 server_linux.go:170] "Using iptables Proxier"
	I0127 12:27:32.182104       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 12:27:32.185227       1 server.go:497] "Version info" version="v1.32.1"
	I0127 12:27:32.185290       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:27:32.188174       1 config.go:199] "Starting service config controller"
	I0127 12:27:32.188748       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 12:27:32.189035       1 config.go:105] "Starting endpoint slice config controller"
	I0127 12:27:32.189095       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 12:27:32.191254       1 config.go:329] "Starting node config controller"
	I0127 12:27:32.191351       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 12:27:32.289792       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 12:27:32.289859       1 shared_informer.go:320] Caches are synced for service config
	I0127 12:27:32.291500       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5f2e731b7a1998a444a35c4c83ec0524edc5ee691ca23c3ff0c43adcac4b54f0] <==
	I0127 12:28:19.682746       1 serving.go:386] Generated self-signed cert in-memory
	W0127 12:28:30.001050       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.83.90:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.83.90:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.83.90:57112->192.168.83.90:8443: read: connection reset by peer
	W0127 12:28:30.001130       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0127 12:28:30.001139       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0127 12:28:30.013947       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0127 12:28:30.013966       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0127 12:28:30.013989       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0127 12:28:30.015962       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E0127 12:28:30.016036       1 server.go:266] "waiting for handlers to sync" err="context canceled"
	E0127 12:28:30.016084       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e6f03254be7fc072cf783d61b7ab141c9fc3389038e5f2997c17e5cde102f373] <==
	I0127 12:28:33.057253       1 serving.go:386] Generated self-signed cert in-memory
	I0127 12:28:34.798931       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0127 12:28:34.803150       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:28:34.836819       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0127 12:28:34.838750       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 12:28:34.838960       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0127 12:28:34.838988       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0127 12:28:34.839330       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0127 12:28:34.839351       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 12:28:34.839475       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0127 12:28:34.839495       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0127 12:28:34.940528       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0127 12:28:34.940602       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0127 12:28:34.940616       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 12:28:33 pause-502641 kubelet[3045]: E0127 12:28:33.854833    3045 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-502641\" not found" node="pause-502641"
	Jan 27 12:28:34 pause-502641 kubelet[3045]: I0127 12:28:34.730061    3045 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-502641"
	Jan 27 12:28:34 pause-502641 kubelet[3045]: I0127 12:28:34.731394    3045 apiserver.go:52] "Watching apiserver"
	Jan 27 12:28:34 pause-502641 kubelet[3045]: I0127 12:28:34.828855    3045 kubelet_node_status.go:125] "Node was previously registered" node="pause-502641"
	Jan 27 12:28:34 pause-502641 kubelet[3045]: I0127 12:28:34.829126    3045 kubelet_node_status.go:79] "Successfully registered node" node="pause-502641"
	Jan 27 12:28:34 pause-502641 kubelet[3045]: I0127 12:28:34.829211    3045 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jan 27 12:28:34 pause-502641 kubelet[3045]: I0127 12:28:34.829585    3045 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jan 27 12:28:34 pause-502641 kubelet[3045]: I0127 12:28:34.830276    3045 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jan 27 12:28:34 pause-502641 kubelet[3045]: I0127 12:28:34.851513    3045 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-502641"
	Jan 27 12:28:34 pause-502641 kubelet[3045]: I0127 12:28:34.851830    3045 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-502641"
	Jan 27 12:28:34 pause-502641 kubelet[3045]: E0127 12:28:34.887040    3045 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-pause-502641\" already exists" pod="kube-system/etcd-pause-502641"
	Jan 27 12:28:34 pause-502641 kubelet[3045]: I0127 12:28:34.887075    3045 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-502641"
	Jan 27 12:28:34 pause-502641 kubelet[3045]: I0127 12:28:34.893394    3045 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2ba3c737-544f-4e4d-9de2-9f0180e87605-xtables-lock\") pod \"kube-proxy-d2qrr\" (UID: \"2ba3c737-544f-4e4d-9de2-9f0180e87605\") " pod="kube-system/kube-proxy-d2qrr"
	Jan 27 12:28:34 pause-502641 kubelet[3045]: I0127 12:28:34.893568    3045 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ba3c737-544f-4e4d-9de2-9f0180e87605-lib-modules\") pod \"kube-proxy-d2qrr\" (UID: \"2ba3c737-544f-4e4d-9de2-9f0180e87605\") " pod="kube-system/kube-proxy-d2qrr"
	Jan 27 12:28:34 pause-502641 kubelet[3045]: E0127 12:28:34.901288    3045 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-502641\" already exists" pod="kube-system/kube-apiserver-pause-502641"
	Jan 27 12:28:34 pause-502641 kubelet[3045]: E0127 12:28:34.901749    3045 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-pause-502641\" already exists" pod="kube-system/etcd-pause-502641"
	Jan 27 12:28:34 pause-502641 kubelet[3045]: E0127 12:28:34.924828    3045 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-502641\" already exists" pod="kube-system/kube-apiserver-pause-502641"
	Jan 27 12:28:34 pause-502641 kubelet[3045]: I0127 12:28:34.924895    3045 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-502641"
	Jan 27 12:28:34 pause-502641 kubelet[3045]: E0127 12:28:34.951074    3045 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-502641\" already exists" pod="kube-system/kube-controller-manager-pause-502641"
	Jan 27 12:28:34 pause-502641 kubelet[3045]: I0127 12:28:34.951107    3045 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-502641"
	Jan 27 12:28:34 pause-502641 kubelet[3045]: E0127 12:28:34.964429    3045 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-502641\" already exists" pod="kube-system/kube-scheduler-pause-502641"
	Jan 27 12:28:41 pause-502641 kubelet[3045]: E0127 12:28:41.845540    3045 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737980921841919599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:28:41 pause-502641 kubelet[3045]: E0127 12:28:41.845605    3045 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737980921841919599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:28:51 pause-502641 kubelet[3045]: E0127 12:28:51.847079    3045 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737980931846514058,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:28:51 pause-502641 kubelet[3045]: E0127 12:28:51.847323    3045 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737980931846514058,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-502641 -n pause-502641
helpers_test.go:261: (dbg) Run:  kubectl --context pause-502641 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (51.34s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-488586 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context old-k8s-version-488586 create -f testdata/busybox.yaml: exit status 1 (46.222975ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-488586" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:194: kubectl --context old-k8s-version-488586 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-488586 -n old-k8s-version-488586
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-488586 -n old-k8s-version-488586: exit status 6 (215.130611ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 12:31:40.854517 1774257 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-488586" does not appear in /home/jenkins/minikube-integration/20318-1724227/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-488586" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-488586 -n old-k8s-version-488586
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-488586 -n old-k8s-version-488586: exit status 6 (215.845059ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 12:31:41.069565 1774287 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-488586" does not appear in /home/jenkins/minikube-integration/20318-1724227/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-488586" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.48s)
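
The stderr above points at the root cause: the "old-k8s-version-488586" context was never written to the kubeconfig, so every kubectl --context call for this profile fails before it reaches any cluster. As a hedged illustration only (these commands are not part of the recorded run), the missing context could be confirmed and regenerated by hand against whatever KUBECONFIG the harness points at:

	# list the context names kubectl actually knows about
	kubectl config get-contexts -o name
	# if the profile's context is absent, rewrite it from the minikube profile,
	# as the "minikube update-context" hint in the status output suggests
	minikube -p old-k8s-version-488586 update-context
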

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (105.68s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-488586 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-488586 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m45.370607044s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-488586 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-488586 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-488586 describe deploy/metrics-server -n kube-system: exit status 1 (64.966512ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-488586" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-488586 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-488586 -n old-k8s-version-488586
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-488586 -n old-k8s-version-488586: exit status 6 (245.322936ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 12:33:26.748905 1775423 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-488586" does not appear in /home/jenkins/minikube-integration/20318-1724227/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-488586" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (105.68s)
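
The enable failure above ultimately reduces to "The connection to the server localhost:8443 was refused": the addon callback runs kubectl inside the guest against the local apiserver, which is not answering. As a hedged sketch only (these commands are not part of the recorded run and assume curl and crictl are available in the minikube guest), the refusal could be narrowed down by hand:

	# check whether an apiserver container is running in the guest at all
	minikube -p old-k8s-version-488586 ssh -- sudo crictl ps -a --name kube-apiserver
	# probe the health endpoint the addon apply depends on
	minikube -p old-k8s-version-488586 ssh -- curl -sk https://localhost:8443/healthz

An empty container list or a refused curl would point at a stopped control plane rather than a networking problem.
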

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (1606.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-472479 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p no-preload-472479 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: signal: killed (26m44.108068169s)

                                                
                                                
-- stdout --
	* [no-preload-472479] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20318
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20318-1724227/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-1724227/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "no-preload-472479" primary control-plane node in "no-preload-472479" cluster
	* Restarting existing kvm2 VM for "no-preload-472479" ...
	* Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-472479 addons enable metrics-server
	
	* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 12:31:55.897362 1774638 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:31:55.897869 1774638 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:31:55.897884 1774638 out.go:358] Setting ErrFile to fd 2...
	I0127 12:31:55.897891 1774638 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:31:55.898349 1774638 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-1724227/.minikube/bin
	I0127 12:31:55.899274 1774638 out.go:352] Setting JSON to false
	I0127 12:31:55.900282 1774638 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":33257,"bootTime":1737947859,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 12:31:55.900398 1774638 start.go:139] virtualization: kvm guest
	I0127 12:31:55.902381 1774638 out.go:177] * [no-preload-472479] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 12:31:55.903588 1774638 out.go:177]   - MINIKUBE_LOCATION=20318
	I0127 12:31:55.903593 1774638 notify.go:220] Checking for updates...
	I0127 12:31:55.904758 1774638 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:31:55.905824 1774638 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20318-1724227/kubeconfig
	I0127 12:31:55.906986 1774638 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-1724227/.minikube
	I0127 12:31:55.908122 1774638 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 12:31:55.909163 1774638 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:31:55.910717 1774638 config.go:182] Loaded profile config "no-preload-472479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:31:55.911232 1774638 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:31:55.911284 1774638 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:31:55.930002 1774638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38089
	I0127 12:31:55.930448 1774638 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:31:55.931136 1774638 main.go:141] libmachine: Using API Version  1
	I0127 12:31:55.931166 1774638 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:31:55.931564 1774638 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:31:55.931796 1774638 main.go:141] libmachine: (no-preload-472479) Calling .DriverName
	I0127 12:31:55.932054 1774638 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:31:55.932370 1774638 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:31:55.932406 1774638 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:31:55.947288 1774638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45871
	I0127 12:31:55.947726 1774638 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:31:55.948196 1774638 main.go:141] libmachine: Using API Version  1
	I0127 12:31:55.948216 1774638 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:31:55.948558 1774638 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:31:55.948752 1774638 main.go:141] libmachine: (no-preload-472479) Calling .DriverName
	I0127 12:31:55.984755 1774638 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 12:31:55.985617 1774638 start.go:297] selected driver: kvm2
	I0127 12:31:55.985635 1774638 start.go:901] validating driver "kvm2" against &{Name:no-preload-472479 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-472479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.27 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:31:55.985776 1774638 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:31:55.986658 1774638 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:31:55.986728 1774638 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20318-1724227/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 12:31:56.002179 1774638 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 12:31:56.002563 1774638 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 12:31:56.002598 1774638 cni.go:84] Creating CNI manager for ""
	I0127 12:31:56.002642 1774638 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 12:31:56.002677 1774638 start.go:340] cluster config:
	{Name:no-preload-472479 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-472479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.27 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:31:56.002844 1774638 iso.go:125] acquiring lock: {Name:mkadfcca31fda677c8c62cad1d325dd2bd0a2473 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:31:56.004246 1774638 out.go:177] * Starting "no-preload-472479" primary control-plane node in "no-preload-472479" cluster
	I0127 12:31:56.005330 1774638 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 12:31:56.005458 1774638 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/no-preload-472479/config.json ...
	I0127 12:31:56.005544 1774638 cache.go:107] acquiring lock: {Name:mk639f71a3608ebd880c09c6f4eb9a539098cf11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:31:56.005590 1774638 cache.go:107] acquiring lock: {Name:mk7d3e8c31e3028ac530b433216d6548161f2b1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:31:56.005610 1774638 cache.go:107] acquiring lock: {Name:mk0ad24c2418ae07d65df52baee7ca3e4777ce5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:31:56.005634 1774638 cache.go:115] /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0127 12:31:56.005668 1774638 cache.go:115] /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0127 12:31:56.005680 1774638 cache.go:115] /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 exists
	I0127 12:31:56.005643 1774638 cache.go:107] acquiring lock: {Name:mk7e91ce66d7bc99a7dd43c311bf67c378549dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:31:56.005684 1774638 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 95.362µs
	I0127 12:31:56.005689 1774638 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0" took 105.66µs
	I0127 12:31:56.005698 1774638 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0127 12:31:56.005700 1774638 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 succeeded
	I0127 12:31:56.005542 1774638 cache.go:107] acquiring lock: {Name:mk9b4a8e0176725482a193dc85ee9e3de8f76e70 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:31:56.005699 1774638 start.go:360] acquireMachinesLock for no-preload-472479: {Name:mk206ddd5e564ea6986d65bef76be5837a8b5360 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 12:31:56.005730 1774638 cache.go:115] /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 exists
	I0127 12:31:56.005542 1774638 cache.go:107] acquiring lock: {Name:mked314d62a39ef0534a0d0db17e6c54c2b2c2af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:31:56.005742 1774638 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.1" -> "/home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1" took 211.871µs
	I0127 12:31:56.005756 1774638 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.1 -> /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 succeeded
	I0127 12:31:56.005726 1774638 cache.go:107] acquiring lock: {Name:mkb25515b3b95c5192227a9f8b73580df8690d67 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:31:56.005711 1774638 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 147.843µs
	I0127 12:31:56.005804 1774638 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0127 12:31:56.005878 1774638 cache.go:115] /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 exists
	I0127 12:31:56.005892 1774638 cache.go:115] /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 exists
	I0127 12:31:56.005882 1774638 cache.go:107] acquiring lock: {Name:mkda62c534daf9b50eef3a3b72d1af9f7ff250f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:31:56.005905 1774638 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.1" -> "/home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1" took 311.54µs
	I0127 12:31:56.005904 1774638 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.1" -> "/home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1" took 221.613µs
	I0127 12:31:56.005926 1774638 cache.go:115] /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 exists
	I0127 12:31:56.005957 1774638 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.1" -> "/home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1" took 422.896µs
	I0127 12:31:56.005967 1774638 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.1 -> /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 succeeded
	I0127 12:31:56.005937 1774638 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.1 -> /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 succeeded
	I0127 12:31:56.005915 1774638 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.1 -> /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 succeeded
	I0127 12:31:56.006007 1774638 cache.go:115] /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0127 12:31:56.006023 1774638 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 183.745µs
	I0127 12:31:56.006040 1774638 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0127 12:31:56.006054 1774638 cache.go:87] Successfully saved all images to host disk.
	I0127 12:32:01.234633 1774638 start.go:364] duration metric: took 5.228879484s to acquireMachinesLock for "no-preload-472479"
	I0127 12:32:01.234698 1774638 start.go:96] Skipping create...Using existing machine configuration
	I0127 12:32:01.234711 1774638 fix.go:54] fixHost starting: 
	I0127 12:32:01.235118 1774638 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:32:01.235176 1774638 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:32:01.252013 1774638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37621
	I0127 12:32:01.252400 1774638 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:32:01.252904 1774638 main.go:141] libmachine: Using API Version  1
	I0127 12:32:01.252925 1774638 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:32:01.253286 1774638 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:32:01.253505 1774638 main.go:141] libmachine: (no-preload-472479) Calling .DriverName
	I0127 12:32:01.253651 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetState
	I0127 12:32:01.255186 1774638 fix.go:112] recreateIfNeeded on no-preload-472479: state=Stopped err=<nil>
	I0127 12:32:01.255212 1774638 main.go:141] libmachine: (no-preload-472479) Calling .DriverName
	W0127 12:32:01.255386 1774638 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 12:32:01.257110 1774638 out.go:177] * Restarting existing kvm2 VM for "no-preload-472479" ...
	I0127 12:32:01.258361 1774638 main.go:141] libmachine: (no-preload-472479) Calling .Start
	I0127 12:32:01.258557 1774638 main.go:141] libmachine: (no-preload-472479) starting domain...
	I0127 12:32:01.258577 1774638 main.go:141] libmachine: (no-preload-472479) ensuring networks are active...
	I0127 12:32:01.259278 1774638 main.go:141] libmachine: (no-preload-472479) Ensuring network default is active
	I0127 12:32:01.259629 1774638 main.go:141] libmachine: (no-preload-472479) Ensuring network mk-no-preload-472479 is active
	I0127 12:32:01.260138 1774638 main.go:141] libmachine: (no-preload-472479) getting domain XML...
	I0127 12:32:01.260880 1774638 main.go:141] libmachine: (no-preload-472479) creating domain...
	I0127 12:32:02.532882 1774638 main.go:141] libmachine: (no-preload-472479) waiting for IP...
	I0127 12:32:02.533940 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:02.534507 1774638 main.go:141] libmachine: (no-preload-472479) DBG | unable to find current IP address of domain no-preload-472479 in network mk-no-preload-472479
	I0127 12:32:02.534566 1774638 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:32:02.534475 1774691 retry.go:31] will retry after 187.81372ms: waiting for domain to come up
	I0127 12:32:02.723905 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:02.724394 1774638 main.go:141] libmachine: (no-preload-472479) DBG | unable to find current IP address of domain no-preload-472479 in network mk-no-preload-472479
	I0127 12:32:02.724426 1774638 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:32:02.724350 1774691 retry.go:31] will retry after 330.177361ms: waiting for domain to come up
	I0127 12:32:03.056058 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:03.056570 1774638 main.go:141] libmachine: (no-preload-472479) DBG | unable to find current IP address of domain no-preload-472479 in network mk-no-preload-472479
	I0127 12:32:03.056612 1774638 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:32:03.056516 1774691 retry.go:31] will retry after 446.541147ms: waiting for domain to come up
	I0127 12:32:03.505139 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:03.505742 1774638 main.go:141] libmachine: (no-preload-472479) DBG | unable to find current IP address of domain no-preload-472479 in network mk-no-preload-472479
	I0127 12:32:03.505770 1774638 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:32:03.505699 1774691 retry.go:31] will retry after 590.902944ms: waiting for domain to come up
	I0127 12:32:04.098790 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:04.099396 1774638 main.go:141] libmachine: (no-preload-472479) DBG | unable to find current IP address of domain no-preload-472479 in network mk-no-preload-472479
	I0127 12:32:04.099423 1774638 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:32:04.099355 1774691 retry.go:31] will retry after 495.502331ms: waiting for domain to come up
	I0127 12:32:04.597166 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:04.597737 1774638 main.go:141] libmachine: (no-preload-472479) DBG | unable to find current IP address of domain no-preload-472479 in network mk-no-preload-472479
	I0127 12:32:04.597772 1774638 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:32:04.597696 1774691 retry.go:31] will retry after 741.745628ms: waiting for domain to come up
	I0127 12:32:05.340921 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:05.341479 1774638 main.go:141] libmachine: (no-preload-472479) DBG | unable to find current IP address of domain no-preload-472479 in network mk-no-preload-472479
	I0127 12:32:05.341502 1774638 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:32:05.341429 1774691 retry.go:31] will retry after 809.5552ms: waiting for domain to come up
	I0127 12:32:06.152441 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:06.153040 1774638 main.go:141] libmachine: (no-preload-472479) DBG | unable to find current IP address of domain no-preload-472479 in network mk-no-preload-472479
	I0127 12:32:06.153071 1774638 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:32:06.153000 1774691 retry.go:31] will retry after 1.413500763s: waiting for domain to come up
	I0127 12:32:07.567933 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:07.568438 1774638 main.go:141] libmachine: (no-preload-472479) DBG | unable to find current IP address of domain no-preload-472479 in network mk-no-preload-472479
	I0127 12:32:07.568471 1774638 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:32:07.568396 1774691 retry.go:31] will retry after 1.403670165s: waiting for domain to come up
	I0127 12:32:08.974051 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:08.974652 1774638 main.go:141] libmachine: (no-preload-472479) DBG | unable to find current IP address of domain no-preload-472479 in network mk-no-preload-472479
	I0127 12:32:08.974678 1774638 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:32:08.974621 1774691 retry.go:31] will retry after 1.788050703s: waiting for domain to come up
	I0127 12:32:10.764345 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:10.765023 1774638 main.go:141] libmachine: (no-preload-472479) DBG | unable to find current IP address of domain no-preload-472479 in network mk-no-preload-472479
	I0127 12:32:10.765061 1774638 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:32:10.764966 1774691 retry.go:31] will retry after 2.75697667s: waiting for domain to come up
	I0127 12:32:13.524102 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:13.524664 1774638 main.go:141] libmachine: (no-preload-472479) DBG | unable to find current IP address of domain no-preload-472479 in network mk-no-preload-472479
	I0127 12:32:13.524696 1774638 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:32:13.524614 1774691 retry.go:31] will retry after 3.220328834s: waiting for domain to come up
	I0127 12:32:16.746205 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:16.746680 1774638 main.go:141] libmachine: (no-preload-472479) DBG | unable to find current IP address of domain no-preload-472479 in network mk-no-preload-472479
	I0127 12:32:16.746708 1774638 main.go:141] libmachine: (no-preload-472479) DBG | I0127 12:32:16.746642 1774691 retry.go:31] will retry after 3.003645263s: waiting for domain to come up
	I0127 12:32:19.753702 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:19.754230 1774638 main.go:141] libmachine: (no-preload-472479) found domain IP: 192.168.50.27
	I0127 12:32:19.754275 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has current primary IP address 192.168.50.27 and MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:19.754289 1774638 main.go:141] libmachine: (no-preload-472479) reserving static IP address...
	I0127 12:32:19.754733 1774638 main.go:141] libmachine: (no-preload-472479) reserved static IP address 192.168.50.27 for domain no-preload-472479
	I0127 12:32:19.754782 1774638 main.go:141] libmachine: (no-preload-472479) DBG | found host DHCP lease matching {name: "no-preload-472479", mac: "52:54:00:07:02:ae", ip: "192.168.50.27"} in network mk-no-preload-472479: {Iface:virbr2 ExpiryTime:2025-01-27 13:32:12 +0000 UTC Type:0 Mac:52:54:00:07:02:ae Iaid: IPaddr:192.168.50.27 Prefix:24 Hostname:no-preload-472479 Clientid:01:52:54:00:07:02:ae}
	I0127 12:32:19.754792 1774638 main.go:141] libmachine: (no-preload-472479) waiting for SSH...
	I0127 12:32:19.754832 1774638 main.go:141] libmachine: (no-preload-472479) DBG | skip adding static IP to network mk-no-preload-472479 - found existing host DHCP lease matching {name: "no-preload-472479", mac: "52:54:00:07:02:ae", ip: "192.168.50.27"}
	I0127 12:32:19.754857 1774638 main.go:141] libmachine: (no-preload-472479) DBG | Getting to WaitForSSH function...
	I0127 12:32:19.757021 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:19.757387 1774638 main.go:141] libmachine: (no-preload-472479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:02:ae", ip: ""} in network mk-no-preload-472479: {Iface:virbr2 ExpiryTime:2025-01-27 13:32:12 +0000 UTC Type:0 Mac:52:54:00:07:02:ae Iaid: IPaddr:192.168.50.27 Prefix:24 Hostname:no-preload-472479 Clientid:01:52:54:00:07:02:ae}
	I0127 12:32:19.757413 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined IP address 192.168.50.27 and MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:19.757589 1774638 main.go:141] libmachine: (no-preload-472479) DBG | Using SSH client type: external
	I0127 12:32:19.757627 1774638 main.go:141] libmachine: (no-preload-472479) DBG | Using SSH private key: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/no-preload-472479/id_rsa (-rw-------)
	I0127 12:32:19.757658 1774638 main.go:141] libmachine: (no-preload-472479) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.27 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/no-preload-472479/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 12:32:19.757684 1774638 main.go:141] libmachine: (no-preload-472479) DBG | About to run SSH command:
	I0127 12:32:19.757700 1774638 main.go:141] libmachine: (no-preload-472479) DBG | exit 0
	I0127 12:32:19.882129 1774638 main.go:141] libmachine: (no-preload-472479) DBG | SSH cmd err, output: <nil>: 
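
For reference, the WaitForSSH probe above can be replayed by hand from the CI host; this is a minimal sketch that reuses only the flags, key path, and guest IP logged by libmachine just above, with nothing else assumed.

	ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 \
	    -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -o PasswordAuthentication=no -o IdentitiesOnly=yes \
	    -i /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/no-preload-472479/id_rsa \
	    docker@192.168.50.27 'exit 0'; echo "ssh exit status: $?"
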
	I0127 12:32:19.882543 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetConfigRaw
	I0127 12:32:19.883233 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetIP
	I0127 12:32:19.885390 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:19.885681 1774638 main.go:141] libmachine: (no-preload-472479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:02:ae", ip: ""} in network mk-no-preload-472479: {Iface:virbr2 ExpiryTime:2025-01-27 13:32:12 +0000 UTC Type:0 Mac:52:54:00:07:02:ae Iaid: IPaddr:192.168.50.27 Prefix:24 Hostname:no-preload-472479 Clientid:01:52:54:00:07:02:ae}
	I0127 12:32:19.885718 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined IP address 192.168.50.27 and MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:19.885988 1774638 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/no-preload-472479/config.json ...
	I0127 12:32:19.886160 1774638 machine.go:93] provisionDockerMachine start ...
	I0127 12:32:19.886177 1774638 main.go:141] libmachine: (no-preload-472479) Calling .DriverName
	I0127 12:32:19.886375 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHHostname
	I0127 12:32:19.888522 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:19.888835 1774638 main.go:141] libmachine: (no-preload-472479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:02:ae", ip: ""} in network mk-no-preload-472479: {Iface:virbr2 ExpiryTime:2025-01-27 13:32:12 +0000 UTC Type:0 Mac:52:54:00:07:02:ae Iaid: IPaddr:192.168.50.27 Prefix:24 Hostname:no-preload-472479 Clientid:01:52:54:00:07:02:ae}
	I0127 12:32:19.888856 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined IP address 192.168.50.27 and MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:19.888995 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHPort
	I0127 12:32:19.889131 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHKeyPath
	I0127 12:32:19.889253 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHKeyPath
	I0127 12:32:19.889324 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHUsername
	I0127 12:32:19.889435 1774638 main.go:141] libmachine: Using SSH client type: native
	I0127 12:32:19.889623 1774638 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.27 22 <nil> <nil>}
	I0127 12:32:19.889636 1774638 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 12:32:19.990644 1774638 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 12:32:19.990679 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetMachineName
	I0127 12:32:19.990947 1774638 buildroot.go:166] provisioning hostname "no-preload-472479"
	I0127 12:32:19.990977 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetMachineName
	I0127 12:32:19.991178 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHHostname
	I0127 12:32:19.993976 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:19.994354 1774638 main.go:141] libmachine: (no-preload-472479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:02:ae", ip: ""} in network mk-no-preload-472479: {Iface:virbr2 ExpiryTime:2025-01-27 13:32:12 +0000 UTC Type:0 Mac:52:54:00:07:02:ae Iaid: IPaddr:192.168.50.27 Prefix:24 Hostname:no-preload-472479 Clientid:01:52:54:00:07:02:ae}
	I0127 12:32:19.994384 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined IP address 192.168.50.27 and MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:19.994493 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHPort
	I0127 12:32:19.994681 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHKeyPath
	I0127 12:32:19.994880 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHKeyPath
	I0127 12:32:19.995044 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHUsername
	I0127 12:32:19.995231 1774638 main.go:141] libmachine: Using SSH client type: native
	I0127 12:32:19.995418 1774638 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.27 22 <nil> <nil>}
	I0127 12:32:19.995431 1774638 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-472479 && echo "no-preload-472479" | sudo tee /etc/hostname
	I0127 12:32:20.107897 1774638 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-472479
	
	I0127 12:32:20.107947 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHHostname
	I0127 12:32:20.111376 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:20.111804 1774638 main.go:141] libmachine: (no-preload-472479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:02:ae", ip: ""} in network mk-no-preload-472479: {Iface:virbr2 ExpiryTime:2025-01-27 13:32:12 +0000 UTC Type:0 Mac:52:54:00:07:02:ae Iaid: IPaddr:192.168.50.27 Prefix:24 Hostname:no-preload-472479 Clientid:01:52:54:00:07:02:ae}
	I0127 12:32:20.111837 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined IP address 192.168.50.27 and MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:20.112090 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHPort
	I0127 12:32:20.112305 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHKeyPath
	I0127 12:32:20.112468 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHKeyPath
	I0127 12:32:20.112635 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHUsername
	I0127 12:32:20.112851 1774638 main.go:141] libmachine: Using SSH client type: native
	I0127 12:32:20.113058 1774638 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.27 22 <nil> <nil>}
	I0127 12:32:20.113075 1774638 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-472479' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-472479/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-472479' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 12:32:20.223267 1774638 main.go:141] libmachine: SSH cmd err, output: <nil>: 
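
If the hostname provisioning above needs to be double-checked, a hedged sketch run on the guest over the same SSH session is enough; the expected value is the one written by the tee and sed commands in this log.

	hostname                                   # expected: no-preload-472479
	grep -n 'no-preload-472479' /etc/hostname /etc/hosts
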
	I0127 12:32:20.223306 1774638 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20318-1724227/.minikube CaCertPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20318-1724227/.minikube}
	I0127 12:32:20.223345 1774638 buildroot.go:174] setting up certificates
	I0127 12:32:20.223357 1774638 provision.go:84] configureAuth start
	I0127 12:32:20.223371 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetMachineName
	I0127 12:32:20.223679 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetIP
	I0127 12:32:20.226634 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:20.227098 1774638 main.go:141] libmachine: (no-preload-472479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:02:ae", ip: ""} in network mk-no-preload-472479: {Iface:virbr2 ExpiryTime:2025-01-27 13:32:12 +0000 UTC Type:0 Mac:52:54:00:07:02:ae Iaid: IPaddr:192.168.50.27 Prefix:24 Hostname:no-preload-472479 Clientid:01:52:54:00:07:02:ae}
	I0127 12:32:20.227135 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined IP address 192.168.50.27 and MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:20.227267 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHHostname
	I0127 12:32:20.229941 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:20.230335 1774638 main.go:141] libmachine: (no-preload-472479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:02:ae", ip: ""} in network mk-no-preload-472479: {Iface:virbr2 ExpiryTime:2025-01-27 13:32:12 +0000 UTC Type:0 Mac:52:54:00:07:02:ae Iaid: IPaddr:192.168.50.27 Prefix:24 Hostname:no-preload-472479 Clientid:01:52:54:00:07:02:ae}
	I0127 12:32:20.230373 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined IP address 192.168.50.27 and MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:20.230529 1774638 provision.go:143] copyHostCerts
	I0127 12:32:20.230583 1774638 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-1724227/.minikube/cert.pem, removing ...
	I0127 12:32:20.230603 1774638 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-1724227/.minikube/cert.pem
	I0127 12:32:20.230655 1774638 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20318-1724227/.minikube/cert.pem (1123 bytes)
	I0127 12:32:20.230756 1774638 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-1724227/.minikube/key.pem, removing ...
	I0127 12:32:20.230766 1774638 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-1724227/.minikube/key.pem
	I0127 12:32:20.230795 1774638 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20318-1724227/.minikube/key.pem (1675 bytes)
	I0127 12:32:20.230854 1774638 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.pem, removing ...
	I0127 12:32:20.230862 1774638 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.pem
	I0127 12:32:20.230882 1774638 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.pem (1078 bytes)
	I0127 12:32:20.230981 1774638 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca-key.pem org=jenkins.no-preload-472479 san=[127.0.0.1 192.168.50.27 localhost minikube no-preload-472479]
	I0127 12:32:20.290410 1774638 provision.go:177] copyRemoteCerts
	I0127 12:32:20.290485 1774638 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 12:32:20.290515 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHHostname
	I0127 12:32:20.293498 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:20.293882 1774638 main.go:141] libmachine: (no-preload-472479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:02:ae", ip: ""} in network mk-no-preload-472479: {Iface:virbr2 ExpiryTime:2025-01-27 13:32:12 +0000 UTC Type:0 Mac:52:54:00:07:02:ae Iaid: IPaddr:192.168.50.27 Prefix:24 Hostname:no-preload-472479 Clientid:01:52:54:00:07:02:ae}
	I0127 12:32:20.293904 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined IP address 192.168.50.27 and MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:20.294145 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHPort
	I0127 12:32:20.294401 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHKeyPath
	I0127 12:32:20.294601 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHUsername
	I0127 12:32:20.294801 1774638 sshutil.go:53] new ssh client: &{IP:192.168.50.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/no-preload-472479/id_rsa Username:docker}
	I0127 12:32:20.380761 1774638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 12:32:20.404441 1774638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0127 12:32:20.426139 1774638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 12:32:20.447407 1774638 provision.go:87] duration metric: took 224.035392ms to configureAuth
	I0127 12:32:20.447441 1774638 buildroot.go:189] setting minikube options for container-runtime
	I0127 12:32:20.447635 1774638 config.go:182] Loaded profile config "no-preload-472479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:32:20.447712 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHHostname
	I0127 12:32:20.450388 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:20.450864 1774638 main.go:141] libmachine: (no-preload-472479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:02:ae", ip: ""} in network mk-no-preload-472479: {Iface:virbr2 ExpiryTime:2025-01-27 13:32:12 +0000 UTC Type:0 Mac:52:54:00:07:02:ae Iaid: IPaddr:192.168.50.27 Prefix:24 Hostname:no-preload-472479 Clientid:01:52:54:00:07:02:ae}
	I0127 12:32:20.450893 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined IP address 192.168.50.27 and MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:20.451097 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHPort
	I0127 12:32:20.451314 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHKeyPath
	I0127 12:32:20.451479 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHKeyPath
	I0127 12:32:20.451647 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHUsername
	I0127 12:32:20.451829 1774638 main.go:141] libmachine: Using SSH client type: native
	I0127 12:32:20.452026 1774638 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.27 22 <nil> <nil>}
	I0127 12:32:20.452046 1774638 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 12:32:20.655739 1774638 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
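
The container-runtime option written above can be read back directly to confirm it landed; a small sketch, assuming the same SSH session to the guest.

	cat /etc/sysconfig/crio.minikube
	# expected content, per the tee command above:
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
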
	I0127 12:32:20.655785 1774638 machine.go:96] duration metric: took 769.609667ms to provisionDockerMachine
	I0127 12:32:20.655812 1774638 start.go:293] postStartSetup for "no-preload-472479" (driver="kvm2")
	I0127 12:32:20.655826 1774638 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 12:32:20.655847 1774638 main.go:141] libmachine: (no-preload-472479) Calling .DriverName
	I0127 12:32:20.656214 1774638 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 12:32:20.656251 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHHostname
	I0127 12:32:20.658909 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:20.659349 1774638 main.go:141] libmachine: (no-preload-472479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:02:ae", ip: ""} in network mk-no-preload-472479: {Iface:virbr2 ExpiryTime:2025-01-27 13:32:12 +0000 UTC Type:0 Mac:52:54:00:07:02:ae Iaid: IPaddr:192.168.50.27 Prefix:24 Hostname:no-preload-472479 Clientid:01:52:54:00:07:02:ae}
	I0127 12:32:20.659380 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined IP address 192.168.50.27 and MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:20.659557 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHPort
	I0127 12:32:20.659758 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHKeyPath
	I0127 12:32:20.659916 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHUsername
	I0127 12:32:20.660057 1774638 sshutil.go:53] new ssh client: &{IP:192.168.50.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/no-preload-472479/id_rsa Username:docker}
	I0127 12:32:20.740735 1774638 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 12:32:20.744713 1774638 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 12:32:20.744740 1774638 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-1724227/.minikube/addons for local assets ...
	I0127 12:32:20.744817 1774638 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-1724227/.minikube/files for local assets ...
	I0127 12:32:20.744921 1774638 filesync.go:149] local asset: /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem -> 17313962.pem in /etc/ssl/certs
	I0127 12:32:20.745037 1774638 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 12:32:20.753961 1774638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem --> /etc/ssl/certs/17313962.pem (1708 bytes)
	I0127 12:32:20.776666 1774638 start.go:296] duration metric: took 120.83815ms for postStartSetup
	I0127 12:32:20.776711 1774638 fix.go:56] duration metric: took 19.54200178s for fixHost
	I0127 12:32:20.776734 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHHostname
	I0127 12:32:20.780109 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:20.780557 1774638 main.go:141] libmachine: (no-preload-472479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:02:ae", ip: ""} in network mk-no-preload-472479: {Iface:virbr2 ExpiryTime:2025-01-27 13:32:12 +0000 UTC Type:0 Mac:52:54:00:07:02:ae Iaid: IPaddr:192.168.50.27 Prefix:24 Hostname:no-preload-472479 Clientid:01:52:54:00:07:02:ae}
	I0127 12:32:20.780586 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined IP address 192.168.50.27 and MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:20.780798 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHPort
	I0127 12:32:20.781012 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHKeyPath
	I0127 12:32:20.781156 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHKeyPath
	I0127 12:32:20.781331 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHUsername
	I0127 12:32:20.781519 1774638 main.go:141] libmachine: Using SSH client type: native
	I0127 12:32:20.781684 1774638 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.27 22 <nil> <nil>}
	I0127 12:32:20.781695 1774638 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 12:32:20.886781 1774638 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737981140.860853009
	
	I0127 12:32:20.886808 1774638 fix.go:216] guest clock: 1737981140.860853009
	I0127 12:32:20.886817 1774638 fix.go:229] Guest: 2025-01-27 12:32:20.860853009 +0000 UTC Remote: 2025-01-27 12:32:20.776714893 +0000 UTC m=+24.919010846 (delta=84.138116ms)
	I0127 12:32:20.886839 1774638 fix.go:200] guest clock delta is within tolerance: 84.138116ms
	I0127 12:32:20.886843 1774638 start.go:83] releasing machines lock for "no-preload-472479", held for 19.65217966s
	I0127 12:32:20.886861 1774638 main.go:141] libmachine: (no-preload-472479) Calling .DriverName
	I0127 12:32:20.887132 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetIP
	I0127 12:32:20.889867 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:20.890303 1774638 main.go:141] libmachine: (no-preload-472479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:02:ae", ip: ""} in network mk-no-preload-472479: {Iface:virbr2 ExpiryTime:2025-01-27 13:32:12 +0000 UTC Type:0 Mac:52:54:00:07:02:ae Iaid: IPaddr:192.168.50.27 Prefix:24 Hostname:no-preload-472479 Clientid:01:52:54:00:07:02:ae}
	I0127 12:32:20.890341 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined IP address 192.168.50.27 and MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:20.890490 1774638 main.go:141] libmachine: (no-preload-472479) Calling .DriverName
	I0127 12:32:20.890954 1774638 main.go:141] libmachine: (no-preload-472479) Calling .DriverName
	I0127 12:32:20.891141 1774638 main.go:141] libmachine: (no-preload-472479) Calling .DriverName
	I0127 12:32:20.891233 1774638 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 12:32:20.891262 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHHostname
	I0127 12:32:20.891343 1774638 ssh_runner.go:195] Run: cat /version.json
	I0127 12:32:20.891376 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHHostname
	I0127 12:32:20.894155 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:20.894184 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:20.894519 1774638 main.go:141] libmachine: (no-preload-472479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:02:ae", ip: ""} in network mk-no-preload-472479: {Iface:virbr2 ExpiryTime:2025-01-27 13:32:12 +0000 UTC Type:0 Mac:52:54:00:07:02:ae Iaid: IPaddr:192.168.50.27 Prefix:24 Hostname:no-preload-472479 Clientid:01:52:54:00:07:02:ae}
	I0127 12:32:20.894557 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined IP address 192.168.50.27 and MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:20.894587 1774638 main.go:141] libmachine: (no-preload-472479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:02:ae", ip: ""} in network mk-no-preload-472479: {Iface:virbr2 ExpiryTime:2025-01-27 13:32:12 +0000 UTC Type:0 Mac:52:54:00:07:02:ae Iaid: IPaddr:192.168.50.27 Prefix:24 Hostname:no-preload-472479 Clientid:01:52:54:00:07:02:ae}
	I0127 12:32:20.894609 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined IP address 192.168.50.27 and MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:20.894785 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHPort
	I0127 12:32:20.894806 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHPort
	I0127 12:32:20.895000 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHKeyPath
	I0127 12:32:20.895006 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHKeyPath
	I0127 12:32:20.895206 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHUsername
	I0127 12:32:20.895219 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHUsername
	I0127 12:32:20.895377 1774638 sshutil.go:53] new ssh client: &{IP:192.168.50.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/no-preload-472479/id_rsa Username:docker}
	I0127 12:32:20.895381 1774638 sshutil.go:53] new ssh client: &{IP:192.168.50.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/no-preload-472479/id_rsa Username:docker}
	I0127 12:32:21.002266 1774638 ssh_runner.go:195] Run: systemctl --version
	I0127 12:32:21.008442 1774638 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 12:32:21.152737 1774638 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 12:32:21.159636 1774638 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 12:32:21.159729 1774638 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 12:32:21.177084 1774638 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 12:32:21.177112 1774638 start.go:495] detecting cgroup driver to use...
	I0127 12:32:21.177195 1774638 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 12:32:21.192621 1774638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 12:32:21.206209 1774638 docker.go:217] disabling cri-docker service (if available) ...
	I0127 12:32:21.206275 1774638 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 12:32:21.218542 1774638 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 12:32:21.231134 1774638 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 12:32:21.353476 1774638 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 12:32:21.494001 1774638 docker.go:233] disabling docker service ...
	I0127 12:32:21.494078 1774638 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 12:32:21.509899 1774638 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 12:32:21.522538 1774638 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 12:32:21.655945 1774638 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 12:32:21.778872 1774638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 12:32:21.793129 1774638 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 12:32:21.810606 1774638 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 12:32:21.810682 1774638 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:32:21.820352 1774638 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 12:32:21.820418 1774638 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:32:21.830298 1774638 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:32:21.840207 1774638 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:32:21.854520 1774638 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 12:32:21.868036 1774638 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:32:21.878908 1774638 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:32:21.895486 1774638 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:32:21.905549 1774638 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 12:32:21.914871 1774638 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 12:32:21.914933 1774638 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 12:32:21.926938 1774638 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 12:32:21.936055 1774638 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:32:22.080914 1774638 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 12:32:22.168161 1774638 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 12:32:22.168278 1774638 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 12:32:22.172934 1774638 start.go:563] Will wait 60s for crictl version
	I0127 12:32:22.172992 1774638 ssh_runner.go:195] Run: which crictl
	I0127 12:32:22.176709 1774638 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 12:32:22.221261 1774638 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
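
The sed-based CRI-O reconfiguration earlier (pause image, cgroupfs cgroup manager, conmon cgroup) can be spot-checked on the guest after the restart; a hedged sketch built only from the config path and commands already present in this log, plus grep.

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl is-active crio          # expected: active
	sudo /usr/bin/crictl version           # expected: RuntimeName cri-o, RuntimeVersion 1.29.1
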
	I0127 12:32:22.221382 1774638 ssh_runner.go:195] Run: crio --version
	I0127 12:32:22.252394 1774638 ssh_runner.go:195] Run: crio --version
	I0127 12:32:22.282286 1774638 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 12:32:22.283567 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetIP
	I0127 12:32:22.286969 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:22.287463 1774638 main.go:141] libmachine: (no-preload-472479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:02:ae", ip: ""} in network mk-no-preload-472479: {Iface:virbr2 ExpiryTime:2025-01-27 13:32:12 +0000 UTC Type:0 Mac:52:54:00:07:02:ae Iaid: IPaddr:192.168.50.27 Prefix:24 Hostname:no-preload-472479 Clientid:01:52:54:00:07:02:ae}
	I0127 12:32:22.287496 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined IP address 192.168.50.27 and MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:32:22.287698 1774638 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0127 12:32:22.293009 1774638 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:32:22.306150 1774638 kubeadm.go:883] updating cluster {Name:no-preload-472479 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-472479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.27 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
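
The cluster definition dumped above is also persisted to the profile's config.json (the path is saved earlier in this log). A sketch for spot-checking it from the CI host; it assumes jq is available and that the JSON keys match the Go field names shown in the dump, both of which are assumptions rather than facts from this log.

	CONFIG=/home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/no-preload-472479/config.json
	jq '{name: .Name, k8s: .KubernetesConfig.KubernetesVersion, runtime: .KubernetesConfig.ContainerRuntime, nodeIP: .Nodes[0].IP}' "$CONFIG"
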
	I0127 12:32:22.306306 1774638 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 12:32:22.306369 1774638 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 12:32:22.342633 1774638 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 12:32:22.342667 1774638 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.32.1 registry.k8s.io/kube-controller-manager:v1.32.1 registry.k8s.io/kube-scheduler:v1.32.1 registry.k8s.io/kube-proxy:v1.32.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.16-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 12:32:22.342774 1774638 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 12:32:22.342796 1774638 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0127 12:32:22.342805 1774638 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.32.1
	I0127 12:32:22.342768 1774638 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:32:22.342811 1774638 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.16-0
	I0127 12:32:22.342853 1774638 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.32.1
	I0127 12:32:22.342773 1774638 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.32.1
	I0127 12:32:22.342861 1774638 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0127 12:32:22.344981 1774638 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0127 12:32:22.345016 1774638 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 12:32:22.345021 1774638 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.16-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.16-0
	I0127 12:32:22.345025 1774638 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:32:22.345018 1774638 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0127 12:32:22.345020 1774638 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.32.1
	I0127 12:32:22.345079 1774638 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.32.1
	I0127 12:32:22.345079 1774638 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.32.1
	I0127 12:32:22.560053 1774638 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0127 12:32:22.573781 1774638 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.32.1
	I0127 12:32:22.583623 1774638 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.32.1
	I0127 12:32:22.595255 1774638 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 12:32:22.603402 1774638 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0127 12:32:22.605938 1774638 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.32.1
	I0127 12:32:22.606788 1774638 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.16-0
	I0127 12:32:22.708708 1774638 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.32.1" needs transfer: "registry.k8s.io/kube-proxy:v1.32.1" does not exist at hash "e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a" in container runtime
	I0127 12:32:22.708753 1774638 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.32.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.32.1" does not exist at hash "2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1" in container runtime
	I0127 12:32:22.708768 1774638 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.32.1
	I0127 12:32:22.708791 1774638 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.32.1
	I0127 12:32:22.708837 1774638 ssh_runner.go:195] Run: which crictl
	I0127 12:32:22.708845 1774638 ssh_runner.go:195] Run: which crictl
	I0127 12:32:22.739432 1774638 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.32.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.32.1" does not exist at hash "019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35" in container runtime
	I0127 12:32:22.739476 1774638 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0127 12:32:22.739488 1774638 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 12:32:22.739515 1774638 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0127 12:32:22.739516 1774638 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.32.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.32.1" does not exist at hash "95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a" in container runtime
	I0127 12:32:22.739544 1774638 ssh_runner.go:195] Run: which crictl
	I0127 12:32:22.739554 1774638 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.32.1
	I0127 12:32:22.739558 1774638 ssh_runner.go:195] Run: which crictl
	I0127 12:32:22.739589 1774638 ssh_runner.go:195] Run: which crictl
	I0127 12:32:22.739592 1774638 cache_images.go:116] "registry.k8s.io/etcd:3.5.16-0" needs transfer: "registry.k8s.io/etcd:3.5.16-0" does not exist at hash "a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc" in container runtime
	I0127 12:32:22.739633 1774638 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.16-0
	I0127 12:32:22.739660 1774638 ssh_runner.go:195] Run: which crictl
	I0127 12:32:22.739676 1774638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.1
	I0127 12:32:22.739682 1774638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.1
	I0127 12:32:22.750461 1774638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.1
	I0127 12:32:22.796606 1774638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.1
	I0127 12:32:22.796664 1774638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0127 12:32:22.796606 1774638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I0127 12:32:22.796784 1774638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 12:32:22.799195 1774638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.1
	I0127 12:32:22.827738 1774638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.1
	I0127 12:32:22.929484 1774638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 12:32:22.929593 1774638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.1
	I0127 12:32:22.929611 1774638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0127 12:32:22.929631 1774638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I0127 12:32:22.929603 1774638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.1
	I0127 12:32:22.929661 1774638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.1
	I0127 12:32:23.040402 1774638 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1
	I0127 12:32:23.040442 1774638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 12:32:23.040491 1774638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.32.1
	I0127 12:32:23.049798 1774638 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1
	I0127 12:32:23.049885 1774638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.32.1
	I0127 12:32:23.063035 1774638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0127 12:32:23.064836 1774638 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1
	I0127 12:32:23.064898 1774638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.32.1
	I0127 12:32:23.064905 1774638 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.32.1 (exists)
	I0127 12:32:23.064924 1774638 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.32.1
	I0127 12:32:23.064972 1774638 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.32.1
	I0127 12:32:23.066682 1774638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I0127 12:32:23.112275 1774638 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.32.1 (exists)
	I0127 12:32:23.112475 1774638 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1
	I0127 12:32:23.112590 1774638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.32.1
	I0127 12:32:23.145761 1774638 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0127 12:32:23.145879 1774638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0127 12:32:23.540721 1774638 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:32:24.974837 1774638 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.32.1: (1.909907617s)
	I0127 12:32:24.974890 1774638 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.32.1 (exists)
	I0127 12:32:24.974848 1774638 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.32.1: (1.909847684s)
	I0127 12:32:24.974900 1774638 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0: (1.908199394s)
	I0127 12:32:24.974935 1774638 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.32.1: (1.862330446s)
	I0127 12:32:24.974954 1774638 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.32.1 (exists)
	I0127 12:32:24.974959 1774638 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0
	I0127 12:32:24.975012 1774638 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (1.829111969s)
	I0127 12:32:24.974908 1774638 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 from cache
	I0127 12:32:24.975040 1774638 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0127 12:32:24.975049 1774638 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0127 12:32:24.975059 1774638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.16-0
	I0127 12:32:24.975098 1774638 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0127 12:32:24.975106 1774638 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.434341378s)
	I0127 12:32:24.975143 1774638 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0127 12:32:24.975180 1774638 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:32:24.975218 1774638 ssh_runner.go:195] Run: which crictl
	I0127 12:32:24.982270 1774638 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.16-0 (exists)
	I0127 12:32:24.982316 1774638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:32:26.945479 1774638 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.970348289s)
	I0127 12:32:26.945523 1774638 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0127 12:32:26.945538 1774638 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.32.1
	I0127 12:32:26.945542 1774638 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.963197583s)
	I0127 12:32:26.945594 1774638 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.32.1
	I0127 12:32:26.945618 1774638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:32:28.931767 1774638 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.32.1: (1.986140807s)
	I0127 12:32:28.931800 1774638 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 from cache
	I0127 12:32:28.931811 1774638 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.32.1
	I0127 12:32:28.931842 1774638 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.986202877s)
	I0127 12:32:28.931893 1774638 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.32.1
	I0127 12:32:28.931911 1774638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:32:31.312577 1774638 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.380641473s)
	I0127 12:32:31.312644 1774638 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0127 12:32:31.312749 1774638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0127 12:32:31.312747 1774638 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.32.1: (2.380827483s)
	I0127 12:32:31.312819 1774638 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 from cache
	I0127 12:32:31.312832 1774638 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.32.1
	I0127 12:32:31.312865 1774638 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.32.1
	I0127 12:32:33.563851 1774638 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.32.1: (2.250949912s)
	I0127 12:32:33.563904 1774638 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 from cache
	I0127 12:32:33.563919 1774638 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.16-0
	I0127 12:32:33.563931 1774638 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.251138595s)
	I0127 12:32:33.564023 1774638 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0127 12:32:33.563974 1774638 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.16-0
	I0127 12:32:37.277568 1774638 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.16-0: (3.713505211s)
	I0127 12:32:37.277605 1774638 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 from cache
	I0127 12:32:37.277636 1774638 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0127 12:32:37.277688 1774638 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0127 12:32:38.228807 1774638 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0127 12:32:38.228864 1774638 cache_images.go:123] Successfully loaded all cached images
	I0127 12:32:38.228873 1774638 cache_images.go:92] duration metric: took 15.886193048s to LoadCachedImages
	I0127 12:32:38.228890 1774638 kubeadm.go:934] updating node { 192.168.50.27 8443 v1.32.1 crio true true} ...
	I0127 12:32:38.229020 1774638 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-472479 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.27
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:no-preload-472479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 12:32:38.229108 1774638 ssh_runner.go:195] Run: crio config
	I0127 12:32:38.271909 1774638 cni.go:84] Creating CNI manager for ""
	I0127 12:32:38.271937 1774638 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 12:32:38.271950 1774638 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 12:32:38.271978 1774638 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.27 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-472479 NodeName:no-preload-472479 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.27"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.27 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 12:32:38.272159 1774638 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.27
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-472479"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.27"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.27"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 12:32:38.272252 1774638 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 12:32:38.281702 1774638 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 12:32:38.281769 1774638 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 12:32:38.290704 1774638 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0127 12:32:38.306077 1774638 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 12:32:38.323563 1774638 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I0127 12:32:38.341700 1774638 ssh_runner.go:195] Run: grep 192.168.50.27	control-plane.minikube.internal$ /etc/hosts
	I0127 12:32:38.345553 1774638 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.27	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:32:38.357460 1774638 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:32:38.485947 1774638 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:32:38.502555 1774638 certs.go:68] Setting up /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/no-preload-472479 for IP: 192.168.50.27
	I0127 12:32:38.502576 1774638 certs.go:194] generating shared ca certs ...
	I0127 12:32:38.502593 1774638 certs.go:226] acquiring lock for ca certs: {Name:mk9e5c59e9dbe250ccc1601895509e2d4f3690e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:32:38.502768 1774638 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.key
	I0127 12:32:38.502806 1774638 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.key
	I0127 12:32:38.502816 1774638 certs.go:256] generating profile certs ...
	I0127 12:32:38.502937 1774638 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/no-preload-472479/client.key
	I0127 12:32:38.502990 1774638 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/no-preload-472479/apiserver.key.daefb834
	I0127 12:32:38.503026 1774638 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/no-preload-472479/proxy-client.key
	I0127 12:32:38.503125 1774638 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/1731396.pem (1338 bytes)
	W0127 12:32:38.503156 1774638 certs.go:480] ignoring /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/1731396_empty.pem, impossibly tiny 0 bytes
	I0127 12:32:38.503166 1774638 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 12:32:38.503188 1774638 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem (1078 bytes)
	I0127 12:32:38.503215 1774638 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem (1123 bytes)
	I0127 12:32:38.503237 1774638 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/key.pem (1675 bytes)
	I0127 12:32:38.503274 1774638 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem (1708 bytes)
	I0127 12:32:38.503877 1774638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 12:32:38.540682 1774638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 12:32:38.566238 1774638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 12:32:38.595797 1774638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 12:32:38.624298 1774638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/no-preload-472479/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0127 12:32:38.658786 1774638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/no-preload-472479/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 12:32:38.682322 1774638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/no-preload-472479/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 12:32:38.704284 1774638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/no-preload-472479/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 12:32:38.725963 1774638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/1731396.pem --> /usr/share/ca-certificates/1731396.pem (1338 bytes)
	I0127 12:32:38.747175 1774638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem --> /usr/share/ca-certificates/17313962.pem (1708 bytes)
	I0127 12:32:38.767760 1774638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 12:32:38.788358 1774638 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 12:32:38.803288 1774638 ssh_runner.go:195] Run: openssl version
	I0127 12:32:38.808775 1774638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 12:32:38.820180 1774638 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:32:38.824694 1774638 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 11:24 /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:32:38.824738 1774638 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:32:38.830288 1774638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 12:32:38.841437 1774638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1731396.pem && ln -fs /usr/share/ca-certificates/1731396.pem /etc/ssl/certs/1731396.pem"
	I0127 12:32:38.851880 1774638 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1731396.pem
	I0127 12:32:38.856195 1774638 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 11:32 /usr/share/ca-certificates/1731396.pem
	I0127 12:32:38.856250 1774638 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1731396.pem
	I0127 12:32:38.861740 1774638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1731396.pem /etc/ssl/certs/51391683.0"
	I0127 12:32:38.872484 1774638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17313962.pem && ln -fs /usr/share/ca-certificates/17313962.pem /etc/ssl/certs/17313962.pem"
	I0127 12:32:38.882272 1774638 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17313962.pem
	I0127 12:32:38.886240 1774638 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 11:32 /usr/share/ca-certificates/17313962.pem
	I0127 12:32:38.886296 1774638 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17313962.pem
	I0127 12:32:38.891487 1774638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17313962.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 12:32:38.901805 1774638 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 12:32:38.905809 1774638 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 12:32:38.912093 1774638 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 12:32:38.917661 1774638 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 12:32:38.922938 1774638 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 12:32:38.928054 1774638 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 12:32:38.933186 1774638 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0127 12:32:38.938474 1774638 kubeadm.go:392] StartCluster: {Name:no-preload-472479 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-472479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.27 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:32:38.938587 1774638 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 12:32:38.938665 1774638 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 12:32:38.973696 1774638 cri.go:89] found id: ""
	I0127 12:32:38.973766 1774638 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 12:32:38.983644 1774638 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 12:32:38.983671 1774638 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 12:32:38.983721 1774638 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 12:32:38.993039 1774638 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 12:32:38.993978 1774638 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-472479" does not appear in /home/jenkins/minikube-integration/20318-1724227/kubeconfig
	I0127 12:32:38.994709 1774638 kubeconfig.go:62] /home/jenkins/minikube-integration/20318-1724227/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-472479" cluster setting kubeconfig missing "no-preload-472479" context setting]
	I0127 12:32:38.995488 1774638 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/kubeconfig: {Name:mk9a903cebca75da3c74ff68f708e0a4e05f999b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:32:38.997102 1774638 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 12:32:39.012863 1774638 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.27
	I0127 12:32:39.012906 1774638 kubeadm.go:1160] stopping kube-system containers ...
	I0127 12:32:39.012924 1774638 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0127 12:32:39.012992 1774638 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 12:32:39.046937 1774638 cri.go:89] found id: ""
	I0127 12:32:39.047012 1774638 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 12:32:39.064465 1774638 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 12:32:39.074147 1774638 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 12:32:39.074169 1774638 kubeadm.go:157] found existing configuration files:
	
	I0127 12:32:39.074223 1774638 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 12:32:39.084640 1774638 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 12:32:39.084717 1774638 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 12:32:39.093823 1774638 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 12:32:39.103065 1774638 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 12:32:39.103110 1774638 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 12:32:39.114336 1774638 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 12:32:39.125628 1774638 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 12:32:39.125695 1774638 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 12:32:39.134688 1774638 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 12:32:39.142868 1774638 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 12:32:39.142918 1774638 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 12:32:39.152377 1774638 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 12:32:39.161686 1774638 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:32:39.265459 1774638 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:32:40.180103 1774638 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:32:40.384716 1774638 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:32:40.454557 1774638 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:32:40.562184 1774638 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:32:40.562285 1774638 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:32:41.062594 1774638 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:32:41.562487 1774638 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:32:41.581621 1774638 api_server.go:72] duration metric: took 1.019440986s to wait for apiserver process to appear ...
	I0127 12:32:41.581648 1774638 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:32:41.581670 1774638 api_server.go:253] Checking apiserver healthz at https://192.168.50.27:8443/healthz ...
	I0127 12:32:43.683586 1774638 api_server.go:279] https://192.168.50.27:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 12:32:43.683635 1774638 api_server.go:103] status: https://192.168.50.27:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 12:32:43.683656 1774638 api_server.go:253] Checking apiserver healthz at https://192.168.50.27:8443/healthz ...
	I0127 12:32:43.743534 1774638 api_server.go:279] https://192.168.50.27:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 12:32:43.743565 1774638 api_server.go:103] status: https://192.168.50.27:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 12:32:44.082019 1774638 api_server.go:253] Checking apiserver healthz at https://192.168.50.27:8443/healthz ...
	I0127 12:32:44.088227 1774638 api_server.go:279] https://192.168.50.27:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 12:32:44.088263 1774638 api_server.go:103] status: https://192.168.50.27:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 12:32:44.581884 1774638 api_server.go:253] Checking apiserver healthz at https://192.168.50.27:8443/healthz ...
	I0127 12:32:44.589797 1774638 api_server.go:279] https://192.168.50.27:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 12:32:44.589823 1774638 api_server.go:103] status: https://192.168.50.27:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 12:32:45.082470 1774638 api_server.go:253] Checking apiserver healthz at https://192.168.50.27:8443/healthz ...
	I0127 12:32:45.088922 1774638 api_server.go:279] https://192.168.50.27:8443/healthz returned 200:
	ok
	I0127 12:32:45.097451 1774638 api_server.go:141] control plane version: v1.32.1
	I0127 12:32:45.097484 1774638 api_server.go:131] duration metric: took 3.515827114s to wait for apiserver health ...
	I0127 12:32:45.097496 1774638 cni.go:84] Creating CNI manager for ""
	I0127 12:32:45.097505 1774638 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 12:32:45.099200 1774638 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 12:32:45.100420 1774638 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 12:32:45.119921 1774638 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 12:32:45.142711 1774638 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:32:45.153820 1774638 system_pods.go:59] 8 kube-system pods found
	I0127 12:32:45.153855 1774638 system_pods.go:61] "coredns-668d6bf9bc-jcq7r" [d260eae6-3154-4893-9729-76bda89db653] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 12:32:45.153862 1774638 system_pods.go:61] "etcd-no-preload-472479" [25463721-4ee7-467e-b6ba-bc79a047cece] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 12:32:45.153875 1774638 system_pods.go:61] "kube-apiserver-no-preload-472479" [5fb94312-b6d7-4f6e-b8a3-54e5fcbb9e84] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 12:32:45.153884 1774638 system_pods.go:61] "kube-controller-manager-no-preload-472479" [16485558-ca1b-4dbc-a7bd-35cf6328fab1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 12:32:45.153894 1774638 system_pods.go:61] "kube-proxy-r42jb" [c8f5be34-fd49-428d-a970-f1f5fd82ae68] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0127 12:32:45.153910 1774638 system_pods.go:61] "kube-scheduler-no-preload-472479" [779ffa00-5461-43b1-820f-c21b7784a524] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 12:32:45.153925 1774638 system_pods.go:61] "metrics-server-f79f97bbb-m278f" [f1dd2b6d-adba-4fca-81a0-b3f4f1d07530] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 12:32:45.153935 1774638 system_pods.go:61] "storage-provisioner" [b0e6d78d-134a-4107-a839-36230d2c1448] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 12:32:45.153945 1774638 system_pods.go:74] duration metric: took 11.212417ms to wait for pod list to return data ...
	I0127 12:32:45.153956 1774638 node_conditions.go:102] verifying NodePressure condition ...
	I0127 12:32:45.159315 1774638 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 12:32:45.159392 1774638 node_conditions.go:123] node cpu capacity is 2
	I0127 12:32:45.159423 1774638 node_conditions.go:105] duration metric: took 5.461932ms to run NodePressure ...
	I0127 12:32:45.159466 1774638 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:32:45.464607 1774638 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0127 12:32:45.470665 1774638 kubeadm.go:739] kubelet initialised
	I0127 12:32:45.470688 1774638 kubeadm.go:740] duration metric: took 6.046568ms waiting for restarted kubelet to initialise ...
	I0127 12:32:45.470696 1774638 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:32:45.475305 1774638 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-jcq7r" in "kube-system" namespace to be "Ready" ...
	I0127 12:32:45.481032 1774638 pod_ready.go:98] node "no-preload-472479" hosting pod "coredns-668d6bf9bc-jcq7r" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-472479" has status "Ready":"False"
	I0127 12:32:45.481063 1774638 pod_ready.go:82] duration metric: took 5.728663ms for pod "coredns-668d6bf9bc-jcq7r" in "kube-system" namespace to be "Ready" ...
	E0127 12:32:45.481077 1774638 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-472479" hosting pod "coredns-668d6bf9bc-jcq7r" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-472479" has status "Ready":"False"
	I0127 12:32:45.481088 1774638 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-472479" in "kube-system" namespace to be "Ready" ...
	I0127 12:32:45.486158 1774638 pod_ready.go:98] node "no-preload-472479" hosting pod "etcd-no-preload-472479" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-472479" has status "Ready":"False"
	I0127 12:32:45.486182 1774638 pod_ready.go:82] duration metric: took 5.081664ms for pod "etcd-no-preload-472479" in "kube-system" namespace to be "Ready" ...
	E0127 12:32:45.486192 1774638 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-472479" hosting pod "etcd-no-preload-472479" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-472479" has status "Ready":"False"
	I0127 12:32:45.486200 1774638 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-472479" in "kube-system" namespace to be "Ready" ...
	I0127 12:32:45.490797 1774638 pod_ready.go:98] node "no-preload-472479" hosting pod "kube-apiserver-no-preload-472479" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-472479" has status "Ready":"False"
	I0127 12:32:45.490827 1774638 pod_ready.go:82] duration metric: took 4.618363ms for pod "kube-apiserver-no-preload-472479" in "kube-system" namespace to be "Ready" ...
	E0127 12:32:45.490848 1774638 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-472479" hosting pod "kube-apiserver-no-preload-472479" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-472479" has status "Ready":"False"
	I0127 12:32:45.490863 1774638 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-472479" in "kube-system" namespace to be "Ready" ...
	I0127 12:32:45.555369 1774638 pod_ready.go:98] node "no-preload-472479" hosting pod "kube-controller-manager-no-preload-472479" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-472479" has status "Ready":"False"
	I0127 12:32:45.555403 1774638 pod_ready.go:82] duration metric: took 64.526585ms for pod "kube-controller-manager-no-preload-472479" in "kube-system" namespace to be "Ready" ...
	E0127 12:32:45.555417 1774638 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-472479" hosting pod "kube-controller-manager-no-preload-472479" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-472479" has status "Ready":"False"
	I0127 12:32:45.555427 1774638 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-r42jb" in "kube-system" namespace to be "Ready" ...
	I0127 12:32:45.946196 1774638 pod_ready.go:93] pod "kube-proxy-r42jb" in "kube-system" namespace has status "Ready":"True"
	I0127 12:32:45.946219 1774638 pod_ready.go:82] duration metric: took 390.779807ms for pod "kube-proxy-r42jb" in "kube-system" namespace to be "Ready" ...
	I0127 12:32:45.946231 1774638 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-472479" in "kube-system" namespace to be "Ready" ...
	I0127 12:32:47.952096 1774638 pod_ready.go:103] pod "kube-scheduler-no-preload-472479" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:49.953586 1774638 pod_ready.go:103] pod "kube-scheduler-no-preload-472479" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:52.452312 1774638 pod_ready.go:103] pod "kube-scheduler-no-preload-472479" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:53.953088 1774638 pod_ready.go:93] pod "kube-scheduler-no-preload-472479" in "kube-system" namespace has status "Ready":"True"
	I0127 12:32:53.953116 1774638 pod_ready.go:82] duration metric: took 8.006875002s for pod "kube-scheduler-no-preload-472479" in "kube-system" namespace to be "Ready" ...
	I0127 12:32:53.953130 1774638 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace to be "Ready" ...
	I0127 12:32:55.960570 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:57.961319 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:00.459716 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:02.459796 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:04.960065 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:07.460622 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:09.959824 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:11.959868 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:14.459306 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:16.459459 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:18.959660 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:21.460000 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:23.958782 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:25.959521 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:28.459001 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:30.459422 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:32.959736 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:35.459974 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:37.460505 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:39.960230 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:42.458628 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:44.458928 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:46.459973 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:48.959561 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:51.460328 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:53.462079 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:55.960447 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:57.960523 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:00.459737 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:02.960733 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:05.459622 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:07.460156 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:09.460198 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:11.462126 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:13.962520 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:16.459580 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:18.958225 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:20.958687 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:22.959363 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:25.458830 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:27.459621 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:29.962682 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:32.458533 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:34.959887 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:37.459861 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:39.960031 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:41.962727 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:44.459783 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:46.459938 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:48.959128 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:51.459327 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:53.958706 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:55.964532 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:58.458948 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:00.959214 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:02.959705 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:05.458791 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:07.460682 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:09.960083 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:12.459383 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:14.958829 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:16.959636 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:19.459872 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:21.459931 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:23.958581 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:25.959846 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:28.459440 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:30.460738 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:32.958313 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:34.959414 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:37.460188 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:39.964085 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:42.460586 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:44.959895 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:47.460115 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:49.959711 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:51.960978 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:54.459458 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:56.459566 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:58.959331 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:00.960564 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:03.458801 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:05.459286 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:07.958874 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:09.959856 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:12.459839 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:14.958871 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:16.959727 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:18.961671 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:21.459217 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:23.460100 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:25.959667 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:27.960855 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:30.458770 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:32.460493 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:34.460713 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:36.959381 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:39.461036 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:41.461619 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:43.962057 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:46.460629 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:48.960362 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:51.459140 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:53.459955 1774638 pod_ready.go:103] pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:53.954181 1774638 pod_ready.go:82] duration metric: took 4m0.001032309s for pod "metrics-server-f79f97bbb-m278f" in "kube-system" namespace to be "Ready" ...
	E0127 12:36:53.954230 1774638 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0127 12:36:53.954250 1774638 pod_ready.go:39] duration metric: took 4m8.483544321s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:36:53.954279 1774638 kubeadm.go:597] duration metric: took 4m14.970602373s to restartPrimaryControlPlane
	W0127 12:36:53.954341 1774638 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 12:36:53.954369 1774638 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 12:37:21.725640 1774638 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.771243477s)
	I0127 12:37:21.725725 1774638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:37:21.749266 1774638 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 12:37:21.769503 1774638 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 12:37:21.784574 1774638 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 12:37:21.784605 1774638 kubeadm.go:157] found existing configuration files:
	
	I0127 12:37:21.784730 1774638 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 12:37:21.797514 1774638 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 12:37:21.797585 1774638 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 12:37:21.808708 1774638 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 12:37:21.819556 1774638 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 12:37:21.819643 1774638 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 12:37:21.837667 1774638 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 12:37:21.852125 1774638 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 12:37:21.852206 1774638 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 12:37:21.877547 1774638 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 12:37:21.888352 1774638 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 12:37:21.888432 1774638 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 12:37:21.897458 1774638 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 12:37:21.949203 1774638 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 12:37:21.949460 1774638 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 12:37:22.072170 1774638 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 12:37:22.072360 1774638 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 12:37:22.072508 1774638 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 12:37:22.082436 1774638 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 12:37:22.084121 1774638 out.go:235]   - Generating certificates and keys ...
	I0127 12:37:22.084231 1774638 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 12:37:22.084325 1774638 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 12:37:22.084464 1774638 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 12:37:22.084567 1774638 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 12:37:22.084669 1774638 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 12:37:22.084747 1774638 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 12:37:22.084830 1774638 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 12:37:22.084920 1774638 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 12:37:22.085018 1774638 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 12:37:22.085116 1774638 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 12:37:22.085172 1774638 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 12:37:22.085253 1774638 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 12:37:22.559542 1774638 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 12:37:22.668595 1774638 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 12:37:22.861897 1774638 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 12:37:23.010249 1774638 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 12:37:23.131836 1774638 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 12:37:23.132478 1774638 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 12:37:23.135902 1774638 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 12:37:23.137476 1774638 out.go:235]   - Booting up control plane ...
	I0127 12:37:23.137626 1774638 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 12:37:23.137755 1774638 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 12:37:23.138595 1774638 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 12:37:23.161370 1774638 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 12:37:23.176399 1774638 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 12:37:23.176469 1774638 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 12:37:23.312577 1774638 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 12:37:23.312772 1774638 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 12:37:23.813604 1774638 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.335706ms
	I0127 12:37:23.813696 1774638 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 12:37:28.818698 1774638 kubeadm.go:310] [api-check] The API server is healthy after 5.003263921s
	I0127 12:37:28.837689 1774638 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 12:37:28.869321 1774638 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 12:37:28.907248 1774638 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 12:37:28.907539 1774638 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-472479 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 12:37:28.921959 1774638 kubeadm.go:310] [bootstrap-token] Using token: ns5ar2.e78q2c01g6wah1lj
	I0127 12:37:28.923188 1774638 out.go:235]   - Configuring RBAC rules ...
	I0127 12:37:28.923308 1774638 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 12:37:28.934479 1774638 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 12:37:28.945121 1774638 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 12:37:28.948911 1774638 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 12:37:28.957605 1774638 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 12:37:28.962471 1774638 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 12:37:29.232402 1774638 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 12:37:29.659560 1774638 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 12:37:30.235258 1774638 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 12:37:30.236471 1774638 kubeadm.go:310] 
	I0127 12:37:30.236569 1774638 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 12:37:30.236582 1774638 kubeadm.go:310] 
	I0127 12:37:30.236681 1774638 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 12:37:30.236694 1774638 kubeadm.go:310] 
	I0127 12:37:30.236728 1774638 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 12:37:30.236806 1774638 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 12:37:30.236879 1774638 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 12:37:30.236895 1774638 kubeadm.go:310] 
	I0127 12:37:30.236977 1774638 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 12:37:30.236986 1774638 kubeadm.go:310] 
	I0127 12:37:30.237052 1774638 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 12:37:30.237062 1774638 kubeadm.go:310] 
	I0127 12:37:30.237136 1774638 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 12:37:30.237235 1774638 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 12:37:30.237328 1774638 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 12:37:30.237335 1774638 kubeadm.go:310] 
	I0127 12:37:30.237447 1774638 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 12:37:30.237543 1774638 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 12:37:30.237549 1774638 kubeadm.go:310] 
	I0127 12:37:30.237653 1774638 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ns5ar2.e78q2c01g6wah1lj \
	I0127 12:37:30.237787 1774638 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b321ef7b3ab14f6d2ff43d6919f4b8b8c82a4707be0e2d90af4f9d1ba84cbc1f \
	I0127 12:37:30.237817 1774638 kubeadm.go:310] 	--control-plane 
	I0127 12:37:30.237823 1774638 kubeadm.go:310] 
	I0127 12:37:30.237895 1774638 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 12:37:30.237899 1774638 kubeadm.go:310] 
	I0127 12:37:30.237965 1774638 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ns5ar2.e78q2c01g6wah1lj \
	I0127 12:37:30.238057 1774638 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b321ef7b3ab14f6d2ff43d6919f4b8b8c82a4707be0e2d90af4f9d1ba84cbc1f 
	I0127 12:37:30.239651 1774638 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 12:37:30.239748 1774638 cni.go:84] Creating CNI manager for ""
	I0127 12:37:30.239773 1774638 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 12:37:30.337880 1774638 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 12:37:30.443025 1774638 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 12:37:30.458509 1774638 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 12:37:30.478819 1774638 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 12:37:30.478874 1774638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:37:30.478925 1774638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-472479 minikube.k8s.io/updated_at=2025_01_27T12_37_30_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650 minikube.k8s.io/name=no-preload-472479 minikube.k8s.io/primary=true
	I0127 12:37:30.980477 1774638 ops.go:34] apiserver oom_adj: -16
	I0127 12:37:30.980622 1774638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:37:31.480789 1774638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:37:31.981116 1774638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:37:32.481225 1774638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:37:32.981672 1774638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:37:33.481456 1774638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:37:33.980903 1774638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:37:34.481303 1774638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:37:34.642219 1774638 kubeadm.go:1113] duration metric: took 4.163395907s to wait for elevateKubeSystemPrivileges
	I0127 12:37:34.642267 1774638 kubeadm.go:394] duration metric: took 4m55.703800187s to StartCluster
	I0127 12:37:34.642296 1774638 settings.go:142] acquiring lock: {Name:mk5612abdbdf8001cdf3481ad7c8001a04d496dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:37:34.642409 1774638 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20318-1724227/kubeconfig
	I0127 12:37:34.643597 1774638 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/kubeconfig: {Name:mk9a903cebca75da3c74ff68f708e0a4e05f999b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:37:34.643840 1774638 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.27 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 12:37:34.643962 1774638 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 12:37:34.644070 1774638 addons.go:69] Setting storage-provisioner=true in profile "no-preload-472479"
	I0127 12:37:34.644088 1774638 addons.go:238] Setting addon storage-provisioner=true in "no-preload-472479"
	W0127 12:37:34.644097 1774638 addons.go:247] addon storage-provisioner should already be in state true
	I0127 12:37:34.644119 1774638 config.go:182] Loaded profile config "no-preload-472479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:37:34.644137 1774638 host.go:66] Checking if "no-preload-472479" exists ...
	I0127 12:37:34.644134 1774638 addons.go:69] Setting default-storageclass=true in profile "no-preload-472479"
	I0127 12:37:34.644148 1774638 addons.go:69] Setting metrics-server=true in profile "no-preload-472479"
	I0127 12:37:34.644196 1774638 addons.go:238] Setting addon metrics-server=true in "no-preload-472479"
	W0127 12:37:34.644211 1774638 addons.go:247] addon metrics-server should already be in state true
	I0127 12:37:34.644237 1774638 host.go:66] Checking if "no-preload-472479" exists ...
	I0127 12:37:34.644196 1774638 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-472479"
	I0127 12:37:34.644138 1774638 addons.go:69] Setting dashboard=true in profile "no-preload-472479"
	I0127 12:37:34.644413 1774638 addons.go:238] Setting addon dashboard=true in "no-preload-472479"
	W0127 12:37:34.644428 1774638 addons.go:247] addon dashboard should already be in state true
	I0127 12:37:34.644460 1774638 host.go:66] Checking if "no-preload-472479" exists ...
	I0127 12:37:34.644597 1774638 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:37:34.644597 1774638 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:37:34.644631 1774638 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:37:34.644700 1774638 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:37:34.644731 1774638 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:37:34.644778 1774638 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:37:34.644809 1774638 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:37:34.644809 1774638 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:37:34.645460 1774638 out.go:177] * Verifying Kubernetes components...
	I0127 12:37:34.647015 1774638 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:37:34.662936 1774638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44833
	I0127 12:37:34.662948 1774638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43385
	I0127 12:37:34.663573 1774638 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:37:34.664211 1774638 main.go:141] libmachine: Using API Version  1
	I0127 12:37:34.664232 1774638 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:37:34.664601 1774638 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:37:34.664758 1774638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41365
	I0127 12:37:34.664938 1774638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34107
	I0127 12:37:34.665178 1774638 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:37:34.665214 1774638 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:37:34.665215 1774638 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:37:34.665254 1774638 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:37:34.665271 1774638 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:37:34.665730 1774638 main.go:141] libmachine: Using API Version  1
	I0127 12:37:34.665756 1774638 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:37:34.665782 1774638 main.go:141] libmachine: Using API Version  1
	I0127 12:37:34.665798 1774638 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:37:34.665825 1774638 main.go:141] libmachine: Using API Version  1
	I0127 12:37:34.665852 1774638 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:37:34.666136 1774638 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:37:34.666171 1774638 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:37:34.666180 1774638 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:37:34.666398 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetState
	I0127 12:37:34.666699 1774638 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:37:34.666772 1774638 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:37:34.666876 1774638 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:37:34.666930 1774638 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:37:34.669367 1774638 addons.go:238] Setting addon default-storageclass=true in "no-preload-472479"
	W0127 12:37:34.669386 1774638 addons.go:247] addon default-storageclass should already be in state true
	I0127 12:37:34.669409 1774638 host.go:66] Checking if "no-preload-472479" exists ...
	I0127 12:37:34.669632 1774638 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:37:34.669663 1774638 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:37:34.685408 1774638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45849
	I0127 12:37:34.685660 1774638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45363
	I0127 12:37:34.686107 1774638 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:37:34.686416 1774638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35353
	I0127 12:37:34.686683 1774638 main.go:141] libmachine: Using API Version  1
	I0127 12:37:34.686699 1774638 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:37:34.686829 1774638 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:37:34.686935 1774638 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:37:34.687260 1774638 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:37:34.687455 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetState
	I0127 12:37:34.687643 1774638 main.go:141] libmachine: Using API Version  1
	I0127 12:37:34.687666 1774638 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:37:34.687853 1774638 main.go:141] libmachine: Using API Version  1
	I0127 12:37:34.687870 1774638 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:37:34.688143 1774638 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:37:34.688321 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetState
	I0127 12:37:34.689895 1774638 main.go:141] libmachine: (no-preload-472479) Calling .DriverName
	I0127 12:37:34.690340 1774638 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:37:34.690635 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetState
	I0127 12:37:34.691053 1774638 main.go:141] libmachine: (no-preload-472479) Calling .DriverName
	I0127 12:37:34.691671 1774638 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 12:37:34.692531 1774638 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:37:34.692838 1774638 main.go:141] libmachine: (no-preload-472479) Calling .DriverName
	I0127 12:37:34.693972 1774638 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 12:37:34.693995 1774638 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 12:37:34.694158 1774638 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:37:34.694175 1774638 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 12:37:34.694195 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHHostname
	I0127 12:37:34.695730 1774638 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 12:37:34.695751 1774638 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 12:37:34.695772 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHHostname
	I0127 12:37:34.696386 1774638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40833
	I0127 12:37:34.696883 1774638 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:37:34.697356 1774638 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 12:37:34.697388 1774638 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 12:37:34.697411 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHHostname
	I0127 12:37:34.697449 1774638 main.go:141] libmachine: Using API Version  1
	I0127 12:37:34.697471 1774638 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:37:34.697759 1774638 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:37:34.698225 1774638 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:37:34.698274 1774638 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:37:34.699796 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:37:34.700240 1774638 main.go:141] libmachine: (no-preload-472479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:02:ae", ip: ""} in network mk-no-preload-472479: {Iface:virbr2 ExpiryTime:2025-01-27 13:32:12 +0000 UTC Type:0 Mac:52:54:00:07:02:ae Iaid: IPaddr:192.168.50.27 Prefix:24 Hostname:no-preload-472479 Clientid:01:52:54:00:07:02:ae}
	I0127 12:37:34.700268 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined IP address 192.168.50.27 and MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:37:34.700537 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHPort
	I0127 12:37:34.700854 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHKeyPath
	I0127 12:37:34.701105 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHUsername
	I0127 12:37:34.701362 1774638 sshutil.go:53] new ssh client: &{IP:192.168.50.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/no-preload-472479/id_rsa Username:docker}
	I0127 12:37:34.706865 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHPort
	I0127 12:37:34.706875 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:37:34.706897 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:37:34.706904 1774638 main.go:141] libmachine: (no-preload-472479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:02:ae", ip: ""} in network mk-no-preload-472479: {Iface:virbr2 ExpiryTime:2025-01-27 13:32:12 +0000 UTC Type:0 Mac:52:54:00:07:02:ae Iaid: IPaddr:192.168.50.27 Prefix:24 Hostname:no-preload-472479 Clientid:01:52:54:00:07:02:ae}
	I0127 12:37:34.706869 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHPort
	I0127 12:37:34.706922 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined IP address 192.168.50.27 and MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:37:34.706923 1774638 main.go:141] libmachine: (no-preload-472479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:02:ae", ip: ""} in network mk-no-preload-472479: {Iface:virbr2 ExpiryTime:2025-01-27 13:32:12 +0000 UTC Type:0 Mac:52:54:00:07:02:ae Iaid: IPaddr:192.168.50.27 Prefix:24 Hostname:no-preload-472479 Clientid:01:52:54:00:07:02:ae}
	I0127 12:37:34.706939 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined IP address 192.168.50.27 and MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:37:34.707082 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHKeyPath
	I0127 12:37:34.707169 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHKeyPath
	I0127 12:37:34.707260 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHUsername
	I0127 12:37:34.707325 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHUsername
	I0127 12:37:34.707488 1774638 sshutil.go:53] new ssh client: &{IP:192.168.50.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/no-preload-472479/id_rsa Username:docker}
	I0127 12:37:34.707503 1774638 sshutil.go:53] new ssh client: &{IP:192.168.50.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/no-preload-472479/id_rsa Username:docker}
	I0127 12:37:34.719904 1774638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46545
	I0127 12:37:34.720348 1774638 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:37:34.720949 1774638 main.go:141] libmachine: Using API Version  1
	I0127 12:37:34.720979 1774638 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:37:34.721346 1774638 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:37:34.721598 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetState
	I0127 12:37:34.723540 1774638 main.go:141] libmachine: (no-preload-472479) Calling .DriverName
	I0127 12:37:34.723888 1774638 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 12:37:34.723904 1774638 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 12:37:34.723923 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHHostname
	I0127 12:37:34.727142 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:37:34.727633 1774638 main.go:141] libmachine: (no-preload-472479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:02:ae", ip: ""} in network mk-no-preload-472479: {Iface:virbr2 ExpiryTime:2025-01-27 13:32:12 +0000 UTC Type:0 Mac:52:54:00:07:02:ae Iaid: IPaddr:192.168.50.27 Prefix:24 Hostname:no-preload-472479 Clientid:01:52:54:00:07:02:ae}
	I0127 12:37:34.727663 1774638 main.go:141] libmachine: (no-preload-472479) DBG | domain no-preload-472479 has defined IP address 192.168.50.27 and MAC address 52:54:00:07:02:ae in network mk-no-preload-472479
	I0127 12:37:34.727936 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHPort
	I0127 12:37:34.728121 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHKeyPath
	I0127 12:37:34.728286 1774638 main.go:141] libmachine: (no-preload-472479) Calling .GetSSHUsername
	I0127 12:37:34.728427 1774638 sshutil.go:53] new ssh client: &{IP:192.168.50.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/no-preload-472479/id_rsa Username:docker}
	I0127 12:37:34.870837 1774638 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:37:34.951286 1774638 node_ready.go:35] waiting up to 6m0s for node "no-preload-472479" to be "Ready" ...
	I0127 12:37:34.964766 1774638 node_ready.go:49] node "no-preload-472479" has status "Ready":"True"
	I0127 12:37:34.964794 1774638 node_ready.go:38] duration metric: took 13.473996ms for node "no-preload-472479" to be "Ready" ...
	I0127 12:37:34.964807 1774638 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:37:34.979579 1774638 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-9plpt" in "kube-system" namespace to be "Ready" ...
	I0127 12:37:35.028947 1774638 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 12:37:35.029096 1774638 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 12:37:35.029113 1774638 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 12:37:35.071822 1774638 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 12:37:35.071845 1774638 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 12:37:35.132962 1774638 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 12:37:35.132998 1774638 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 12:37:35.136012 1774638 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:37:35.154283 1774638 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 12:37:35.154323 1774638 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 12:37:35.226785 1774638 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 12:37:35.226820 1774638 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 12:37:35.269775 1774638 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 12:37:35.355013 1774638 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 12:37:35.355131 1774638 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 12:37:35.466047 1774638 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 12:37:35.466223 1774638 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 12:37:35.494823 1774638 main.go:141] libmachine: Making call to close driver server
	I0127 12:37:35.494924 1774638 main.go:141] libmachine: (no-preload-472479) Calling .Close
	I0127 12:37:35.495388 1774638 main.go:141] libmachine: (no-preload-472479) DBG | Closing plugin on server side
	I0127 12:37:35.495398 1774638 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:37:35.495415 1774638 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:37:35.495425 1774638 main.go:141] libmachine: Making call to close driver server
	I0127 12:37:35.495432 1774638 main.go:141] libmachine: (no-preload-472479) Calling .Close
	I0127 12:37:35.495692 1774638 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:37:35.495717 1774638 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:37:35.507730 1774638 main.go:141] libmachine: Making call to close driver server
	I0127 12:37:35.507755 1774638 main.go:141] libmachine: (no-preload-472479) Calling .Close
	I0127 12:37:35.508109 1774638 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:37:35.508133 1774638 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:37:35.560029 1774638 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 12:37:35.560060 1774638 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 12:37:35.636970 1774638 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 12:37:35.637000 1774638 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 12:37:35.709492 1774638 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 12:37:35.709527 1774638 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 12:37:35.798347 1774638 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 12:37:35.798376 1774638 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 12:37:35.862818 1774638 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 12:37:35.862850 1774638 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 12:37:35.946698 1774638 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 12:37:36.511528 1774638 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.375469499s)
	I0127 12:37:36.511595 1774638 main.go:141] libmachine: Making call to close driver server
	I0127 12:37:36.511611 1774638 main.go:141] libmachine: (no-preload-472479) Calling .Close
	I0127 12:37:36.511917 1774638 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:37:36.511970 1774638 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:37:36.511991 1774638 main.go:141] libmachine: Making call to close driver server
	I0127 12:37:36.512004 1774638 main.go:141] libmachine: (no-preload-472479) Calling .Close
	I0127 12:37:36.512240 1774638 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:37:36.512349 1774638 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:37:36.512358 1774638 main.go:141] libmachine: (no-preload-472479) DBG | Closing plugin on server side
	I0127 12:37:36.839967 1774638 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.570077173s)
	I0127 12:37:36.840042 1774638 main.go:141] libmachine: Making call to close driver server
	I0127 12:37:36.840059 1774638 main.go:141] libmachine: (no-preload-472479) Calling .Close
	I0127 12:37:36.840414 1774638 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:37:36.840439 1774638 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:37:36.840459 1774638 main.go:141] libmachine: Making call to close driver server
	I0127 12:37:36.840470 1774638 main.go:141] libmachine: (no-preload-472479) Calling .Close
	I0127 12:37:36.840740 1774638 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:37:36.840801 1774638 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:37:36.840767 1774638 main.go:141] libmachine: (no-preload-472479) DBG | Closing plugin on server side
	I0127 12:37:36.840818 1774638 addons.go:479] Verifying addon metrics-server=true in "no-preload-472479"
	I0127 12:37:36.987543 1774638 pod_ready.go:93] pod "coredns-668d6bf9bc-9plpt" in "kube-system" namespace has status "Ready":"True"
	I0127 12:37:36.987579 1774638 pod_ready.go:82] duration metric: took 2.007970465s for pod "coredns-668d6bf9bc-9plpt" in "kube-system" namespace to be "Ready" ...
	I0127 12:37:36.987593 1774638 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-cttbf" in "kube-system" namespace to be "Ready" ...
	I0127 12:37:37.567863 1774638 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.621092741s)
	I0127 12:37:37.567924 1774638 main.go:141] libmachine: Making call to close driver server
	I0127 12:37:37.567940 1774638 main.go:141] libmachine: (no-preload-472479) Calling .Close
	I0127 12:37:37.568257 1774638 main.go:141] libmachine: (no-preload-472479) DBG | Closing plugin on server side
	I0127 12:37:37.568303 1774638 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:37:37.568313 1774638 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:37:37.568330 1774638 main.go:141] libmachine: Making call to close driver server
	I0127 12:37:37.568342 1774638 main.go:141] libmachine: (no-preload-472479) Calling .Close
	I0127 12:37:37.568590 1774638 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:37:37.568607 1774638 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:37:37.570250 1774638 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-472479 addons enable metrics-server
	
	I0127 12:37:37.571647 1774638 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0127 12:37:37.572958 1774638 addons.go:514] duration metric: took 2.929029414s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0127 12:37:38.994239 1774638 pod_ready.go:103] pod "coredns-668d6bf9bc-cttbf" in "kube-system" namespace has status "Ready":"False"
	I0127 12:37:40.995918 1774638 pod_ready.go:103] pod "coredns-668d6bf9bc-cttbf" in "kube-system" namespace has status "Ready":"False"
	I0127 12:37:43.044129 1774638 pod_ready.go:93] pod "coredns-668d6bf9bc-cttbf" in "kube-system" namespace has status "Ready":"True"
	I0127 12:37:43.044151 1774638 pod_ready.go:82] duration metric: took 6.056550853s for pod "coredns-668d6bf9bc-cttbf" in "kube-system" namespace to be "Ready" ...
	I0127 12:37:43.044167 1774638 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-472479" in "kube-system" namespace to be "Ready" ...
	I0127 12:37:43.048230 1774638 pod_ready.go:93] pod "etcd-no-preload-472479" in "kube-system" namespace has status "Ready":"True"
	I0127 12:37:43.048248 1774638 pod_ready.go:82] duration metric: took 4.074741ms for pod "etcd-no-preload-472479" in "kube-system" namespace to be "Ready" ...
	I0127 12:37:43.048256 1774638 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-472479" in "kube-system" namespace to be "Ready" ...
	I0127 12:37:43.052626 1774638 pod_ready.go:93] pod "kube-apiserver-no-preload-472479" in "kube-system" namespace has status "Ready":"True"
	I0127 12:37:43.052642 1774638 pod_ready.go:82] duration metric: took 4.381091ms for pod "kube-apiserver-no-preload-472479" in "kube-system" namespace to be "Ready" ...
	I0127 12:37:43.052651 1774638 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-472479" in "kube-system" namespace to be "Ready" ...
	I0127 12:37:43.061949 1774638 pod_ready.go:93] pod "kube-controller-manager-no-preload-472479" in "kube-system" namespace has status "Ready":"True"
	I0127 12:37:43.061970 1774638 pod_ready.go:82] duration metric: took 9.312962ms for pod "kube-controller-manager-no-preload-472479" in "kube-system" namespace to be "Ready" ...
	I0127 12:37:43.061982 1774638 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-777hh" in "kube-system" namespace to be "Ready" ...
	I0127 12:37:43.066419 1774638 pod_ready.go:93] pod "kube-proxy-777hh" in "kube-system" namespace has status "Ready":"True"
	I0127 12:37:43.066437 1774638 pod_ready.go:82] duration metric: took 4.44714ms for pod "kube-proxy-777hh" in "kube-system" namespace to be "Ready" ...
	I0127 12:37:43.066449 1774638 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-472479" in "kube-system" namespace to be "Ready" ...
	I0127 12:37:43.396532 1774638 pod_ready.go:93] pod "kube-scheduler-no-preload-472479" in "kube-system" namespace has status "Ready":"True"
	I0127 12:37:43.396557 1774638 pod_ready.go:82] duration metric: took 330.100112ms for pod "kube-scheduler-no-preload-472479" in "kube-system" namespace to be "Ready" ...
	I0127 12:37:43.396568 1774638 pod_ready.go:39] duration metric: took 8.43174505s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:37:43.396590 1774638 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:37:43.396651 1774638 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:37:43.414997 1774638 api_server.go:72] duration metric: took 8.771122432s to wait for apiserver process to appear ...
	I0127 12:37:43.415035 1774638 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:37:43.415058 1774638 api_server.go:253] Checking apiserver healthz at https://192.168.50.27:8443/healthz ...
	I0127 12:37:43.422043 1774638 api_server.go:279] https://192.168.50.27:8443/healthz returned 200:
	ok
	I0127 12:37:43.423055 1774638 api_server.go:141] control plane version: v1.32.1
	I0127 12:37:43.423080 1774638 api_server.go:131] duration metric: took 8.036513ms to wait for apiserver health ...
	I0127 12:37:43.423091 1774638 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:37:43.595838 1774638 system_pods.go:59] 9 kube-system pods found
	I0127 12:37:43.595868 1774638 system_pods.go:61] "coredns-668d6bf9bc-9plpt" [7460a186-1436-4dcb-b065-73d918b87428] Running
	I0127 12:37:43.595873 1774638 system_pods.go:61] "coredns-668d6bf9bc-cttbf" [5feb255b-3898-42bd-9419-dce1a016e154] Running
	I0127 12:37:43.595876 1774638 system_pods.go:61] "etcd-no-preload-472479" [ee3cd49f-e3ee-4ccd-8bb4-440baba82da8] Running
	I0127 12:37:43.595880 1774638 system_pods.go:61] "kube-apiserver-no-preload-472479" [8427a9f7-3b0f-4d66-bc8d-2383130ad93a] Running
	I0127 12:37:43.595884 1774638 system_pods.go:61] "kube-controller-manager-no-preload-472479" [08ec0ba0-cb11-4019-a04c-cb8b67f47c6f] Running
	I0127 12:37:43.595887 1774638 system_pods.go:61] "kube-proxy-777hh" [856e6e4e-7fab-4ba7-9236-2d98705e6431] Running
	I0127 12:37:43.595891 1774638 system_pods.go:61] "kube-scheduler-no-preload-472479" [ce895d12-b68e-4af6-a5fc-f58b823b08aa] Running
	I0127 12:37:43.595896 1774638 system_pods.go:61] "metrics-server-f79f97bbb-sh4m7" [7a889d9f-c677-4338-a846-7067b568b6ca] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 12:37:43.595903 1774638 system_pods.go:61] "storage-provisioner" [2e068a21-4866-4db6-a5cf-736f52620cd1] Running
	I0127 12:37:43.595910 1774638 system_pods.go:74] duration metric: took 172.812771ms to wait for pod list to return data ...
	I0127 12:37:43.595920 1774638 default_sa.go:34] waiting for default service account to be created ...
	I0127 12:37:43.792853 1774638 default_sa.go:45] found service account: "default"
	I0127 12:37:43.792883 1774638 default_sa.go:55] duration metric: took 196.953113ms for default service account to be created ...
	I0127 12:37:43.792892 1774638 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 12:37:43.997018 1774638 system_pods.go:87] 9 kube-system pods found

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p no-preload-472479 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1": signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-472479 -n no-preload-472479
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-472479 logs -n 25
E0127 12:58:40.371269 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/flannel-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-472479 logs -n 25: (1.313831502s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-956477 sudo                                | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:49 UTC | 27 Jan 25 12:49 UTC |
	|         | systemctl status kubelet --all                       |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:49 UTC | 27 Jan 25 12:49 UTC |
	|         | systemctl cat kubelet                                |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:49 UTC | 27 Jan 25 12:49 UTC |
	|         | journalctl -xeu kubelet --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo cat                            | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:49 UTC | 27 Jan 25 12:49 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo cat                            | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:49 UTC | 27 Jan 25 12:49 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:49 UTC |                     |
	|         | systemctl status docker --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:49 UTC | 27 Jan 25 12:49 UTC |
	|         | systemctl cat docker                                 |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo cat                            | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:49 UTC | 27 Jan 25 12:49 UTC |
	|         | /etc/docker/daemon.json                              |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo docker                         | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC |                     |
	|         | system info                                          |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC |                     |
	|         | systemctl status cri-docker                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | systemctl cat cri-docker                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo cat                            | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo cat                            | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | cri-dockerd --version                                |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC |                     |
	|         | systemctl status containerd                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | systemctl cat containerd                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo cat                            | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | /lib/systemd/system/containerd.service               |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo cat                            | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | /etc/containerd/config.toml                          |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | containerd config dump                               |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | systemctl status crio --all                          |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | systemctl cat crio --no-pager                        |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo find                           | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo crio                           | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | config                                               |                        |         |         |                     |                     |
	| delete  | -p bridge-956477                                     | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	| delete  | -p old-k8s-version-488586                            | old-k8s-version-488586 | jenkins | v1.35.0 | 27 Jan 25 12:57 UTC | 27 Jan 25 12:57 UTC |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 12:48:45
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 12:48:45.061131 1790192 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:48:45.061460 1790192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:48:45.061507 1790192 out.go:358] Setting ErrFile to fd 2...
	I0127 12:48:45.061571 1790192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:48:45.061947 1790192 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-1724227/.minikube/bin
	I0127 12:48:45.062550 1790192 out.go:352] Setting JSON to false
	I0127 12:48:45.063760 1790192 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":34266,"bootTime":1737947859,"procs":313,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 12:48:45.063872 1790192 start.go:139] virtualization: kvm guest
	I0127 12:48:45.065969 1790192 out.go:177] * [bridge-956477] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 12:48:45.067136 1790192 out.go:177]   - MINIKUBE_LOCATION=20318
	I0127 12:48:45.067134 1790192 notify.go:220] Checking for updates...
	I0127 12:48:45.068296 1790192 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:48:45.069519 1790192 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20318-1724227/kubeconfig
	I0127 12:48:45.070522 1790192 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-1724227/.minikube
	I0127 12:48:45.071653 1790192 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 12:48:45.072745 1790192 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:48:45.074387 1790192 config.go:182] Loaded profile config "default-k8s-diff-port-485564": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:48:45.074542 1790192 config.go:182] Loaded profile config "no-preload-472479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:48:45.074661 1790192 config.go:182] Loaded profile config "old-k8s-version-488586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 12:48:45.074797 1790192 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:48:45.111354 1790192 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 12:48:45.112385 1790192 start.go:297] selected driver: kvm2
	I0127 12:48:45.112404 1790192 start.go:901] validating driver "kvm2" against <nil>
	I0127 12:48:45.112417 1790192 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:48:45.113111 1790192 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:48:45.113192 1790192 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20318-1724227/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 12:48:45.129191 1790192 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 12:48:45.129247 1790192 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 12:48:45.129509 1790192 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 12:48:45.129542 1790192 cni.go:84] Creating CNI manager for "bridge"
	I0127 12:48:45.129550 1790192 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 12:48:45.129616 1790192 start.go:340] cluster config:
	{Name:bridge-956477 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-956477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:48:45.129762 1790192 iso.go:125] acquiring lock: {Name:mkadfcca31fda677c8c62cad1d325dd2bd0a2473 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:48:45.131229 1790192 out.go:177] * Starting "bridge-956477" primary control-plane node in "bridge-956477" cluster
	I0127 12:48:45.132207 1790192 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 12:48:45.132243 1790192 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 12:48:45.132258 1790192 cache.go:56] Caching tarball of preloaded images
	I0127 12:48:45.132337 1790192 preload.go:172] Found /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 12:48:45.132351 1790192 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 12:48:45.132455 1790192 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/config.json ...
	I0127 12:48:45.132478 1790192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/config.json: {Name:mka55a4b4af7aaf9911ae593f9f5e3f84a3441e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:48:45.133024 1790192 start.go:360] acquireMachinesLock for bridge-956477: {Name:mk206ddd5e564ea6986d65bef76be5837a8b5360 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 12:48:45.133083 1790192 start.go:364] duration metric: took 34.753µs to acquireMachinesLock for "bridge-956477"
	I0127 12:48:45.133110 1790192 start.go:93] Provisioning new machine with config: &{Name:bridge-956477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-956477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 12:48:45.133187 1790192 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 12:48:45.134561 1790192 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0127 12:48:45.134690 1790192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:48:45.134731 1790192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:48:45.149509 1790192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35177
	I0127 12:48:45.150027 1790192 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:48:45.150619 1790192 main.go:141] libmachine: Using API Version  1
	I0127 12:48:45.150641 1790192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:48:45.150972 1790192 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:48:45.151149 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetMachineName
	I0127 12:48:45.151259 1790192 main.go:141] libmachine: (bridge-956477) Calling .DriverName
	I0127 12:48:45.151400 1790192 start.go:159] libmachine.API.Create for "bridge-956477" (driver="kvm2")
	I0127 12:48:45.151431 1790192 client.go:168] LocalClient.Create starting
	I0127 12:48:45.151462 1790192 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem
	I0127 12:48:45.151502 1790192 main.go:141] libmachine: Decoding PEM data...
	I0127 12:48:45.151518 1790192 main.go:141] libmachine: Parsing certificate...
	I0127 12:48:45.151583 1790192 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem
	I0127 12:48:45.151607 1790192 main.go:141] libmachine: Decoding PEM data...
	I0127 12:48:45.151621 1790192 main.go:141] libmachine: Parsing certificate...
	I0127 12:48:45.151653 1790192 main.go:141] libmachine: Running pre-create checks...
	I0127 12:48:45.151666 1790192 main.go:141] libmachine: (bridge-956477) Calling .PreCreateCheck
	I0127 12:48:45.152022 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetConfigRaw
	I0127 12:48:45.152404 1790192 main.go:141] libmachine: Creating machine...
	I0127 12:48:45.152417 1790192 main.go:141] libmachine: (bridge-956477) Calling .Create
	I0127 12:48:45.152533 1790192 main.go:141] libmachine: (bridge-956477) creating KVM machine...
	I0127 12:48:45.152554 1790192 main.go:141] libmachine: (bridge-956477) creating network...
	I0127 12:48:45.153709 1790192 main.go:141] libmachine: (bridge-956477) DBG | found existing default KVM network
	I0127 12:48:45.154981 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:45.154812 1790215 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:13:89:36} reservation:<nil>}
	I0127 12:48:45.156047 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:45.155949 1790215 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:f9:0f:53} reservation:<nil>}
	I0127 12:48:45.156973 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:45.156878 1790215 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:ac:57:68} reservation:<nil>}
	I0127 12:48:45.158158 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:45.158076 1790215 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00039efc0}
	I0127 12:48:45.158183 1790192 main.go:141] libmachine: (bridge-956477) DBG | created network xml: 
	I0127 12:48:45.158196 1790192 main.go:141] libmachine: (bridge-956477) DBG | <network>
	I0127 12:48:45.158206 1790192 main.go:141] libmachine: (bridge-956477) DBG |   <name>mk-bridge-956477</name>
	I0127 12:48:45.158211 1790192 main.go:141] libmachine: (bridge-956477) DBG |   <dns enable='no'/>
	I0127 12:48:45.158215 1790192 main.go:141] libmachine: (bridge-956477) DBG |   
	I0127 12:48:45.158222 1790192 main.go:141] libmachine: (bridge-956477) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0127 12:48:45.158232 1790192 main.go:141] libmachine: (bridge-956477) DBG |     <dhcp>
	I0127 12:48:45.158241 1790192 main.go:141] libmachine: (bridge-956477) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0127 12:48:45.158250 1790192 main.go:141] libmachine: (bridge-956477) DBG |     </dhcp>
	I0127 12:48:45.158258 1790192 main.go:141] libmachine: (bridge-956477) DBG |   </ip>
	I0127 12:48:45.158266 1790192 main.go:141] libmachine: (bridge-956477) DBG |   
	I0127 12:48:45.158275 1790192 main.go:141] libmachine: (bridge-956477) DBG | </network>
	I0127 12:48:45.158288 1790192 main.go:141] libmachine: (bridge-956477) DBG | 
	I0127 12:48:45.163152 1790192 main.go:141] libmachine: (bridge-956477) DBG | trying to create private KVM network mk-bridge-956477 192.168.72.0/24...
	I0127 12:48:45.234336 1790192 main.go:141] libmachine: (bridge-956477) DBG | private KVM network mk-bridge-956477 192.168.72.0/24 created
	I0127 12:48:45.234373 1790192 main.go:141] libmachine: (bridge-956477) setting up store path in /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477 ...
	I0127 12:48:45.234401 1790192 main.go:141] libmachine: (bridge-956477) building disk image from file:///home/jenkins/minikube-integration/20318-1724227/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 12:48:45.234417 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:45.234378 1790215 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20318-1724227/.minikube
	I0127 12:48:45.234566 1790192 main.go:141] libmachine: (bridge-956477) Downloading /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20318-1724227/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 12:48:45.542800 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:45.542627 1790215 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/id_rsa...
	I0127 12:48:45.665840 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:45.665684 1790215 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/bridge-956477.rawdisk...
	I0127 12:48:45.665878 1790192 main.go:141] libmachine: (bridge-956477) DBG | Writing magic tar header
	I0127 12:48:45.665895 1790192 main.go:141] libmachine: (bridge-956477) DBG | Writing SSH key tar header
	I0127 12:48:45.665905 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:45.665802 1790215 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477 ...
	I0127 12:48:45.665915 1790192 main.go:141] libmachine: (bridge-956477) setting executable bit set on /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477 (perms=drwx------)
	I0127 12:48:45.665924 1790192 main.go:141] libmachine: (bridge-956477) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477
	I0127 12:48:45.665934 1790192 main.go:141] libmachine: (bridge-956477) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines
	I0127 12:48:45.665954 1790192 main.go:141] libmachine: (bridge-956477) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20318-1724227/.minikube
	I0127 12:48:45.665963 1790192 main.go:141] libmachine: (bridge-956477) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20318-1724227
	I0127 12:48:45.665979 1790192 main.go:141] libmachine: (bridge-956477) setting executable bit set on /home/jenkins/minikube-integration/20318-1724227/.minikube/machines (perms=drwxr-xr-x)
	I0127 12:48:45.665993 1790192 main.go:141] libmachine: (bridge-956477) setting executable bit set on /home/jenkins/minikube-integration/20318-1724227/.minikube (perms=drwxr-xr-x)
	I0127 12:48:45.666023 1790192 main.go:141] libmachine: (bridge-956477) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 12:48:45.666045 1790192 main.go:141] libmachine: (bridge-956477) DBG | checking permissions on dir: /home/jenkins
	I0127 12:48:45.666058 1790192 main.go:141] libmachine: (bridge-956477) setting executable bit set on /home/jenkins/minikube-integration/20318-1724227 (perms=drwxrwxr-x)
	I0127 12:48:45.666069 1790192 main.go:141] libmachine: (bridge-956477) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 12:48:45.666074 1790192 main.go:141] libmachine: (bridge-956477) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 12:48:45.666085 1790192 main.go:141] libmachine: (bridge-956477) creating domain...
	I0127 12:48:45.666092 1790192 main.go:141] libmachine: (bridge-956477) DBG | checking permissions on dir: /home
	I0127 12:48:45.666099 1790192 main.go:141] libmachine: (bridge-956477) DBG | skipping /home - not owner
	I0127 12:48:45.667183 1790192 main.go:141] libmachine: (bridge-956477) define libvirt domain using xml: 
	I0127 12:48:45.667207 1790192 main.go:141] libmachine: (bridge-956477) <domain type='kvm'>
	I0127 12:48:45.667217 1790192 main.go:141] libmachine: (bridge-956477)   <name>bridge-956477</name>
	I0127 12:48:45.667225 1790192 main.go:141] libmachine: (bridge-956477)   <memory unit='MiB'>3072</memory>
	I0127 12:48:45.667233 1790192 main.go:141] libmachine: (bridge-956477)   <vcpu>2</vcpu>
	I0127 12:48:45.667241 1790192 main.go:141] libmachine: (bridge-956477)   <features>
	I0127 12:48:45.667252 1790192 main.go:141] libmachine: (bridge-956477)     <acpi/>
	I0127 12:48:45.667256 1790192 main.go:141] libmachine: (bridge-956477)     <apic/>
	I0127 12:48:45.667262 1790192 main.go:141] libmachine: (bridge-956477)     <pae/>
	I0127 12:48:45.667266 1790192 main.go:141] libmachine: (bridge-956477)     
	I0127 12:48:45.667283 1790192 main.go:141] libmachine: (bridge-956477)   </features>
	I0127 12:48:45.667291 1790192 main.go:141] libmachine: (bridge-956477)   <cpu mode='host-passthrough'>
	I0127 12:48:45.667311 1790192 main.go:141] libmachine: (bridge-956477)   
	I0127 12:48:45.667327 1790192 main.go:141] libmachine: (bridge-956477)   </cpu>
	I0127 12:48:45.667351 1790192 main.go:141] libmachine: (bridge-956477)   <os>
	I0127 12:48:45.667372 1790192 main.go:141] libmachine: (bridge-956477)     <type>hvm</type>
	I0127 12:48:45.667389 1790192 main.go:141] libmachine: (bridge-956477)     <boot dev='cdrom'/>
	I0127 12:48:45.667405 1790192 main.go:141] libmachine: (bridge-956477)     <boot dev='hd'/>
	I0127 12:48:45.667416 1790192 main.go:141] libmachine: (bridge-956477)     <bootmenu enable='no'/>
	I0127 12:48:45.667423 1790192 main.go:141] libmachine: (bridge-956477)   </os>
	I0127 12:48:45.667433 1790192 main.go:141] libmachine: (bridge-956477)   <devices>
	I0127 12:48:45.667441 1790192 main.go:141] libmachine: (bridge-956477)     <disk type='file' device='cdrom'>
	I0127 12:48:45.667452 1790192 main.go:141] libmachine: (bridge-956477)       <source file='/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/boot2docker.iso'/>
	I0127 12:48:45.667459 1790192 main.go:141] libmachine: (bridge-956477)       <target dev='hdc' bus='scsi'/>
	I0127 12:48:45.667464 1790192 main.go:141] libmachine: (bridge-956477)       <readonly/>
	I0127 12:48:45.667470 1790192 main.go:141] libmachine: (bridge-956477)     </disk>
	I0127 12:48:45.667480 1790192 main.go:141] libmachine: (bridge-956477)     <disk type='file' device='disk'>
	I0127 12:48:45.667502 1790192 main.go:141] libmachine: (bridge-956477)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 12:48:45.667514 1790192 main.go:141] libmachine: (bridge-956477)       <source file='/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/bridge-956477.rawdisk'/>
	I0127 12:48:45.667519 1790192 main.go:141] libmachine: (bridge-956477)       <target dev='hda' bus='virtio'/>
	I0127 12:48:45.667527 1790192 main.go:141] libmachine: (bridge-956477)     </disk>
	I0127 12:48:45.667531 1790192 main.go:141] libmachine: (bridge-956477)     <interface type='network'>
	I0127 12:48:45.667537 1790192 main.go:141] libmachine: (bridge-956477)       <source network='mk-bridge-956477'/>
	I0127 12:48:45.667544 1790192 main.go:141] libmachine: (bridge-956477)       <model type='virtio'/>
	I0127 12:48:45.667549 1790192 main.go:141] libmachine: (bridge-956477)     </interface>
	I0127 12:48:45.667555 1790192 main.go:141] libmachine: (bridge-956477)     <interface type='network'>
	I0127 12:48:45.667582 1790192 main.go:141] libmachine: (bridge-956477)       <source network='default'/>
	I0127 12:48:45.667600 1790192 main.go:141] libmachine: (bridge-956477)       <model type='virtio'/>
	I0127 12:48:45.667613 1790192 main.go:141] libmachine: (bridge-956477)     </interface>
	I0127 12:48:45.667621 1790192 main.go:141] libmachine: (bridge-956477)     <serial type='pty'>
	I0127 12:48:45.667633 1790192 main.go:141] libmachine: (bridge-956477)       <target port='0'/>
	I0127 12:48:45.667640 1790192 main.go:141] libmachine: (bridge-956477)     </serial>
	I0127 12:48:45.667651 1790192 main.go:141] libmachine: (bridge-956477)     <console type='pty'>
	I0127 12:48:45.667662 1790192 main.go:141] libmachine: (bridge-956477)       <target type='serial' port='0'/>
	I0127 12:48:45.667673 1790192 main.go:141] libmachine: (bridge-956477)     </console>
	I0127 12:48:45.667691 1790192 main.go:141] libmachine: (bridge-956477)     <rng model='virtio'>
	I0127 12:48:45.667705 1790192 main.go:141] libmachine: (bridge-956477)       <backend model='random'>/dev/random</backend>
	I0127 12:48:45.667714 1790192 main.go:141] libmachine: (bridge-956477)     </rng>
	I0127 12:48:45.667722 1790192 main.go:141] libmachine: (bridge-956477)     
	I0127 12:48:45.667731 1790192 main.go:141] libmachine: (bridge-956477)     
	I0127 12:48:45.667740 1790192 main.go:141] libmachine: (bridge-956477)   </devices>
	I0127 12:48:45.667749 1790192 main.go:141] libmachine: (bridge-956477) </domain>
	I0127 12:48:45.667765 1790192 main.go:141] libmachine: (bridge-956477) 
	I0127 12:48:45.672524 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:ac:62:83 in network default
	I0127 12:48:45.673006 1790192 main.go:141] libmachine: (bridge-956477) starting domain...
	I0127 12:48:45.673024 1790192 main.go:141] libmachine: (bridge-956477) ensuring networks are active...
	I0127 12:48:45.673031 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:45.673650 1790192 main.go:141] libmachine: (bridge-956477) Ensuring network default is active
	I0127 12:48:45.673918 1790192 main.go:141] libmachine: (bridge-956477) Ensuring network mk-bridge-956477 is active
	I0127 12:48:45.674443 1790192 main.go:141] libmachine: (bridge-956477) getting domain XML...
	I0127 12:48:45.675241 1790192 main.go:141] libmachine: (bridge-956477) creating domain...
	I0127 12:48:46.910072 1790192 main.go:141] libmachine: (bridge-956477) waiting for IP...
	I0127 12:48:46.910991 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:46.911503 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:46.911587 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:46.911518 1790215 retry.go:31] will retry after 215.854927ms: waiting for domain to come up
	I0127 12:48:47.128865 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:47.129422 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:47.129454 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:47.129389 1790215 retry.go:31] will retry after 345.744835ms: waiting for domain to come up
	I0127 12:48:47.476809 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:47.477321 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:47.477351 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:47.477304 1790215 retry.go:31] will retry after 387.587044ms: waiting for domain to come up
	I0127 12:48:47.867011 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:47.867519 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:47.867563 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:47.867512 1790215 retry.go:31] will retry after 564.938674ms: waiting for domain to come up
	I0127 12:48:48.434398 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:48.434970 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:48.434999 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:48.434928 1790215 retry.go:31] will retry after 628.439712ms: waiting for domain to come up
	I0127 12:48:49.064853 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:49.065323 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:49.065358 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:49.065288 1790215 retry.go:31] will retry after 745.70592ms: waiting for domain to come up
	I0127 12:48:49.813123 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:49.813748 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:49.813780 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:49.813723 1790215 retry.go:31] will retry after 1.074334161s: waiting for domain to come up
	I0127 12:48:50.889220 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:50.889785 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:50.889855 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:50.889789 1790215 retry.go:31] will retry after 1.318459201s: waiting for domain to come up
	I0127 12:48:52.210197 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:52.210618 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:52.210645 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:52.210599 1790215 retry.go:31] will retry after 1.764815725s: waiting for domain to come up
	I0127 12:48:53.976580 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:53.977130 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:53.977158 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:53.977081 1790215 retry.go:31] will retry after 1.410873374s: waiting for domain to come up
	I0127 12:48:55.389480 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:55.389911 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:55.389944 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:55.389893 1790215 retry.go:31] will retry after 2.738916299s: waiting for domain to come up
	I0127 12:48:58.130207 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:58.130681 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:58.130707 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:58.130646 1790215 retry.go:31] will retry after 3.218706779s: waiting for domain to come up
	I0127 12:49:01.351430 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:01.351988 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:49:01.352019 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:49:01.351955 1790215 retry.go:31] will retry after 4.065804066s: waiting for domain to come up
	I0127 12:49:05.419663 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:05.420108 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has current primary IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:05.420160 1790192 main.go:141] libmachine: (bridge-956477) found domain IP: 192.168.72.28
	I0127 12:49:05.420175 1790192 main.go:141] libmachine: (bridge-956477) reserving static IP address...
	I0127 12:49:05.420595 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find host DHCP lease matching {name: "bridge-956477", mac: "52:54:00:49:99:d8", ip: "192.168.72.28"} in network mk-bridge-956477
	I0127 12:49:05.499266 1790192 main.go:141] libmachine: (bridge-956477) reserved static IP address 192.168.72.28 for domain bridge-956477
	I0127 12:49:05.499303 1790192 main.go:141] libmachine: (bridge-956477) waiting for SSH...
	I0127 12:49:05.499314 1790192 main.go:141] libmachine: (bridge-956477) DBG | Getting to WaitForSSH function...
	I0127 12:49:05.501992 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:05.502523 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:minikube Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:05.502574 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:05.502769 1790192 main.go:141] libmachine: (bridge-956477) DBG | Using SSH client type: external
	I0127 12:49:05.502798 1790192 main.go:141] libmachine: (bridge-956477) DBG | Using SSH private key: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/id_rsa (-rw-------)
	I0127 12:49:05.502836 1790192 main.go:141] libmachine: (bridge-956477) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.28 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 12:49:05.502851 1790192 main.go:141] libmachine: (bridge-956477) DBG | About to run SSH command:
	I0127 12:49:05.502863 1790192 main.go:141] libmachine: (bridge-956477) DBG | exit 0
	I0127 12:49:05.630859 1790192 main.go:141] libmachine: (bridge-956477) DBG | SSH cmd err, output: <nil>: 
	I0127 12:49:05.631203 1790192 main.go:141] libmachine: (bridge-956477) KVM machine creation complete
	I0127 12:49:05.631537 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetConfigRaw
	I0127 12:49:05.632120 1790192 main.go:141] libmachine: (bridge-956477) Calling .DriverName
	I0127 12:49:05.632328 1790192 main.go:141] libmachine: (bridge-956477) Calling .DriverName
	I0127 12:49:05.632512 1790192 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0127 12:49:05.632550 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetState
	I0127 12:49:05.633838 1790192 main.go:141] libmachine: Detecting operating system of created instance...
	I0127 12:49:05.633852 1790192 main.go:141] libmachine: Waiting for SSH to be available...
	I0127 12:49:05.633858 1790192 main.go:141] libmachine: Getting to WaitForSSH function...
	I0127 12:49:05.633864 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:05.635988 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:05.636359 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:05.636387 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:05.636482 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:05.636688 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:05.636840 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:05.636999 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:05.637148 1790192 main.go:141] libmachine: Using SSH client type: native
	I0127 12:49:05.637417 1790192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0127 12:49:05.637432 1790192 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0127 12:49:05.753913 1790192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 12:49:05.753957 1790192 main.go:141] libmachine: Detecting the provisioner...
	I0127 12:49:05.753969 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:05.757035 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:05.757484 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:05.757521 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:05.757749 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:05.757961 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:05.758132 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:05.758270 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:05.758481 1790192 main.go:141] libmachine: Using SSH client type: native
	I0127 12:49:05.758721 1790192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0127 12:49:05.758739 1790192 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0127 12:49:05.871011 1790192 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0127 12:49:05.871181 1790192 main.go:141] libmachine: found compatible host: buildroot
	I0127 12:49:05.871198 1790192 main.go:141] libmachine: Provisioning with buildroot...
	I0127 12:49:05.871211 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetMachineName
	I0127 12:49:05.871499 1790192 buildroot.go:166] provisioning hostname "bridge-956477"
	I0127 12:49:05.871532 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetMachineName
	I0127 12:49:05.871711 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:05.874488 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:05.874941 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:05.874964 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:05.875152 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:05.875328 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:05.875456 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:05.875555 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:05.875684 1790192 main.go:141] libmachine: Using SSH client type: native
	I0127 12:49:05.875864 1790192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0127 12:49:05.875875 1790192 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-956477 && echo "bridge-956477" | sudo tee /etc/hostname
	I0127 12:49:05.999963 1790192 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-956477
	
	I0127 12:49:06.000010 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:06.002594 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.003041 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.003070 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.003263 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:06.003462 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:06.003628 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:06.003746 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:06.003889 1790192 main.go:141] libmachine: Using SSH client type: native
	I0127 12:49:06.004099 1790192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0127 12:49:06.004116 1790192 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-956477' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-956477/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-956477' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 12:49:06.126689 1790192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 12:49:06.126724 1790192 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20318-1724227/.minikube CaCertPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20318-1724227/.minikube}
	I0127 12:49:06.126788 1790192 buildroot.go:174] setting up certificates
	I0127 12:49:06.126798 1790192 provision.go:84] configureAuth start
	I0127 12:49:06.126811 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetMachineName
	I0127 12:49:06.127071 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetIP
	I0127 12:49:06.129597 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.129936 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.129956 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.130134 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:06.132135 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.132428 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.132453 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.132601 1790192 provision.go:143] copyHostCerts
	I0127 12:49:06.132670 1790192 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.pem, removing ...
	I0127 12:49:06.132693 1790192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.pem
	I0127 12:49:06.132778 1790192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.pem (1078 bytes)
	I0127 12:49:06.132883 1790192 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-1724227/.minikube/cert.pem, removing ...
	I0127 12:49:06.132896 1790192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-1724227/.minikube/cert.pem
	I0127 12:49:06.132941 1790192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20318-1724227/.minikube/cert.pem (1123 bytes)
	I0127 12:49:06.133012 1790192 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-1724227/.minikube/key.pem, removing ...
	I0127 12:49:06.133023 1790192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-1724227/.minikube/key.pem
	I0127 12:49:06.133056 1790192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20318-1724227/.minikube/key.pem (1675 bytes)
	I0127 12:49:06.133127 1790192 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca-key.pem org=jenkins.bridge-956477 san=[127.0.0.1 192.168.72.28 bridge-956477 localhost minikube]
	I0127 12:49:06.244065 1790192 provision.go:177] copyRemoteCerts
	I0127 12:49:06.244134 1790192 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 12:49:06.244179 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:06.247068 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.247401 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.247439 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.247543 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:06.247734 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:06.247886 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:06.248045 1790192 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/id_rsa Username:docker}
	I0127 12:49:06.332164 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 12:49:06.355222 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0127 12:49:06.377606 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 12:49:06.400935 1790192 provision.go:87] duration metric: took 274.121357ms to configureAuth
	I0127 12:49:06.400966 1790192 buildroot.go:189] setting minikube options for container-runtime
	I0127 12:49:06.401190 1790192 config.go:182] Loaded profile config "bridge-956477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:49:06.401304 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:06.403876 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.404282 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.404311 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.404522 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:06.404717 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:06.404875 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:06.405024 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:06.405242 1790192 main.go:141] libmachine: Using SSH client type: native
	I0127 12:49:06.405432 1790192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0127 12:49:06.405453 1790192 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 12:49:06.632004 1790192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 12:49:06.632052 1790192 main.go:141] libmachine: Checking connection to Docker...
	I0127 12:49:06.632066 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetURL
	I0127 12:49:06.633455 1790192 main.go:141] libmachine: (bridge-956477) DBG | using libvirt version 6000000
	I0127 12:49:06.635940 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.636296 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.636319 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.636439 1790192 main.go:141] libmachine: Docker is up and running!
	I0127 12:49:06.636466 1790192 main.go:141] libmachine: Reticulating splines...
	I0127 12:49:06.636474 1790192 client.go:171] duration metric: took 21.485034654s to LocalClient.Create
	I0127 12:49:06.636493 1790192 start.go:167] duration metric: took 21.485094344s to libmachine.API.Create "bridge-956477"
	I0127 12:49:06.636508 1790192 start.go:293] postStartSetup for "bridge-956477" (driver="kvm2")
	I0127 12:49:06.636525 1790192 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 12:49:06.636556 1790192 main.go:141] libmachine: (bridge-956477) Calling .DriverName
	I0127 12:49:06.636838 1790192 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 12:49:06.636862 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:06.639069 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.639386 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.639422 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.639563 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:06.639752 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:06.639929 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:06.640062 1790192 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/id_rsa Username:docker}
	I0127 12:49:06.724850 1790192 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 12:49:06.729112 1790192 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 12:49:06.729134 1790192 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-1724227/.minikube/addons for local assets ...
	I0127 12:49:06.729192 1790192 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-1724227/.minikube/files for local assets ...
	I0127 12:49:06.729293 1790192 filesync.go:149] local asset: /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem -> 17313962.pem in /etc/ssl/certs
	I0127 12:49:06.729434 1790192 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 12:49:06.738467 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem --> /etc/ssl/certs/17313962.pem (1708 bytes)
	I0127 12:49:06.761545 1790192 start.go:296] duration metric: took 125.019791ms for postStartSetup
	I0127 12:49:06.761593 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetConfigRaw
	I0127 12:49:06.762205 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetIP
	I0127 12:49:06.765437 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.765808 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.765828 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.766138 1790192 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/config.json ...
	I0127 12:49:06.766350 1790192 start.go:128] duration metric: took 21.63314943s to createHost
	I0127 12:49:06.766380 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:06.768832 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.769141 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.769168 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.769330 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:06.769547 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:06.769745 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:06.769899 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:06.770075 1790192 main.go:141] libmachine: Using SSH client type: native
	I0127 12:49:06.770262 1790192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0127 12:49:06.770272 1790192 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 12:49:06.887120 1790192 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737982146.857755472
	
	I0127 12:49:06.887157 1790192 fix.go:216] guest clock: 1737982146.857755472
	I0127 12:49:06.887177 1790192 fix.go:229] Guest: 2025-01-27 12:49:06.857755472 +0000 UTC Remote: 2025-01-27 12:49:06.76636518 +0000 UTC m=+21.744166745 (delta=91.390292ms)
	I0127 12:49:06.887213 1790192 fix.go:200] guest clock delta is within tolerance: 91.390292ms
	I0127 12:49:06.887222 1790192 start.go:83] releasing machines lock for "bridge-956477", held for 21.754125785s
	I0127 12:49:06.887266 1790192 main.go:141] libmachine: (bridge-956477) Calling .DriverName
	I0127 12:49:06.887556 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetIP
	I0127 12:49:06.890291 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.890686 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.890715 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.890834 1790192 main.go:141] libmachine: (bridge-956477) Calling .DriverName
	I0127 12:49:06.891309 1790192 main.go:141] libmachine: (bridge-956477) Calling .DriverName
	I0127 12:49:06.891479 1790192 main.go:141] libmachine: (bridge-956477) Calling .DriverName
	I0127 12:49:06.891572 1790192 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 12:49:06.891614 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:06.891715 1790192 ssh_runner.go:195] Run: cat /version.json
	I0127 12:49:06.891742 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:06.894127 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.894492 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.894531 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.894720 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.894976 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:06.895300 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:06.895305 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.895579 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.895614 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:06.895836 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:06.895831 1790192 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/id_rsa Username:docker}
	I0127 12:49:06.896003 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:06.896190 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:06.896366 1790192 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/id_rsa Username:docker}
	I0127 12:49:07.014147 1790192 ssh_runner.go:195] Run: systemctl --version
	I0127 12:49:07.020023 1790192 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 12:49:07.181331 1790192 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 12:49:07.186863 1790192 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 12:49:07.186954 1790192 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 12:49:07.203385 1790192 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 12:49:07.203419 1790192 start.go:495] detecting cgroup driver to use...
	I0127 12:49:07.203478 1790192 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 12:49:07.218431 1790192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 12:49:07.231459 1790192 docker.go:217] disabling cri-docker service (if available) ...
	I0127 12:49:07.231505 1790192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 12:49:07.244939 1790192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 12:49:07.257985 1790192 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 12:49:07.382245 1790192 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 12:49:07.544971 1790192 docker.go:233] disabling docker service ...
	I0127 12:49:07.545044 1790192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 12:49:07.559296 1790192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 12:49:07.572107 1790192 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 12:49:07.710722 1790192 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 12:49:07.842352 1790192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 12:49:07.856902 1790192 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 12:49:07.873833 1790192 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 12:49:07.873895 1790192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:49:07.883449 1790192 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 12:49:07.883540 1790192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:49:07.893268 1790192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:49:07.902934 1790192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:49:07.913200 1790192 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 12:49:07.923183 1790192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:49:07.932933 1790192 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:49:07.948940 1790192 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:49:07.958726 1790192 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 12:49:07.967409 1790192 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 12:49:07.967473 1790192 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 12:49:07.979872 1790192 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
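
The three steps above show the usual fallback: the bridge-netfilter sysctl is absent until the br_netfilter module is loaded, after which IPv4 forwarding is switched on. A minimal sketch of that sequence in Go (not minikube's code; it assumes root and the standard procfs paths):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the logged sequence: if the
// bridge-nf-call-iptables sysctl file is missing, load br_netfilter,
// then enable IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		// sysctl not present yet: the bridge netfilter module is not loaded
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("bridge netfilter and ip_forward configured")
}
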
	I0127 12:49:07.988693 1790192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:49:08.106626 1790192 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 12:49:08.190261 1790192 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 12:49:08.190341 1790192 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 12:49:08.195228 1790192 start.go:563] Will wait 60s for crictl version
	I0127 12:49:08.195312 1790192 ssh_runner.go:195] Run: which crictl
	I0127 12:49:08.198797 1790192 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 12:49:08.237887 1790192 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 12:49:08.238012 1790192 ssh_runner.go:195] Run: crio --version
	I0127 12:49:08.263030 1790192 ssh_runner.go:195] Run: crio --version
	I0127 12:49:08.290320 1790192 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 12:49:08.291370 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetIP
	I0127 12:49:08.294322 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:08.294643 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:08.294675 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:08.294858 1790192 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0127 12:49:08.298640 1790192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:49:08.311920 1790192 kubeadm.go:883] updating cluster {Name:bridge-956477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-956477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.28 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 12:49:08.312091 1790192 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 12:49:08.312156 1790192 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 12:49:08.343416 1790192 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 12:49:08.343484 1790192 ssh_runner.go:195] Run: which lz4
	I0127 12:49:08.347177 1790192 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 12:49:08.351091 1790192 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 12:49:08.351126 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0127 12:49:09.560777 1790192 crio.go:462] duration metric: took 1.213632525s to copy over tarball
	I0127 12:49:09.560892 1790192 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 12:49:11.737884 1790192 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.176958842s)
	I0127 12:49:11.737916 1790192 crio.go:469] duration metric: took 2.177103692s to extract the tarball
	I0127 12:49:11.737927 1790192 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 12:49:11.774005 1790192 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 12:49:11.812704 1790192 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 12:49:11.812729 1790192 cache_images.go:84] Images are preloaded, skipping loading
	I0127 12:49:11.812737 1790192 kubeadm.go:934] updating node { 192.168.72.28 8443 v1.32.1 crio true true} ...
	I0127 12:49:11.812874 1790192 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-956477 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.28
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:bridge-956477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0127 12:49:11.812971 1790192 ssh_runner.go:195] Run: crio config
	I0127 12:49:11.868174 1790192 cni.go:84] Creating CNI manager for "bridge"
	I0127 12:49:11.868200 1790192 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 12:49:11.868222 1790192 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.28 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-956477 NodeName:bridge-956477 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.28"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.28 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 12:49:11.868356 1790192 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.28
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-956477"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.28"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.28"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 12:49:11.868420 1790192 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 12:49:11.877576 1790192 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 12:49:11.877641 1790192 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 12:49:11.886156 1790192 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0127 12:49:11.901855 1790192 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 12:49:11.917311 1790192 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
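
The kubeadm config printed above is generated on the host and copied to the node as kubeadm.yaml.new. As a rough illustration of how such a node-specific fragment can be rendered, here is a small Go text/template sketch; the template text and field names are simplified assumptions for the example and are not minikube's real template.

package main

import (
	"os"
	"text/template"
)

// A trimmed-down illustration of templating the node-specific part of the
// kubeadm config shown above. The real template used by minikube is larger.
const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
`

type nodeParams struct {
	NodeIP        string
	APIServerPort int
	NodeName      string
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(initTmpl))
	p := nodeParams{NodeIP: "192.168.72.28", APIServerPort: 8443, NodeName: "bridge-956477"}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
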
	I0127 12:49:11.933025 1790192 ssh_runner.go:195] Run: grep 192.168.72.28	control-plane.minikube.internal$ /etc/hosts
	I0127 12:49:11.936616 1790192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.28	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
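
The bash one-liner above rewrites /etc/hosts: it filters out any existing control-plane.minikube.internal entry and appends a fresh one. A minimal Go sketch of the same idea (the path is a parameter so it can be tried on a scratch file; this is not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"strings"
)

// updateHosts drops any line ending in "\t<host>" and appends "<ip>\t<host>",
// mirroring the `grep -v` plus `echo` pipeline from the log.
func updateHosts(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale entry, like `grep -v` in the log
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := updateHosts("hosts.test", "192.168.72.28", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
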
	I0127 12:49:11.948439 1790192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:49:12.060451 1790192 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:49:12.076612 1790192 certs.go:68] Setting up /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477 for IP: 192.168.72.28
	I0127 12:49:12.076638 1790192 certs.go:194] generating shared ca certs ...
	I0127 12:49:12.076680 1790192 certs.go:226] acquiring lock for ca certs: {Name:mk9e5c59e9dbe250ccc1601895509e2d4f3690e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:49:12.076872 1790192 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.key
	I0127 12:49:12.076941 1790192 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.key
	I0127 12:49:12.076955 1790192 certs.go:256] generating profile certs ...
	I0127 12:49:12.077065 1790192 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/client.key
	I0127 12:49:12.077096 1790192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/client.crt with IP's: []
	I0127 12:49:12.388180 1790192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/client.crt ...
	I0127 12:49:12.388212 1790192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/client.crt: {Name:mk35e754849912c2ccbef7aee78a8cb664d71760 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:49:12.393143 1790192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/client.key ...
	I0127 12:49:12.393176 1790192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/client.key: {Name:mk1a4eb1684f2df27d8a0393e4c3ccce9e3de875 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:49:12.393803 1790192 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.key.754e3ec9
	I0127 12:49:12.393834 1790192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.crt.754e3ec9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.28]
	I0127 12:49:12.504705 1790192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.crt.754e3ec9 ...
	I0127 12:49:12.504741 1790192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.crt.754e3ec9: {Name:mkc470d67580d2e81bf8ee097c21f9b4e89d97ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:49:12.504924 1790192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.key.754e3ec9 ...
	I0127 12:49:12.504944 1790192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.key.754e3ec9: {Name:mkfe8a7bf14247bc7909277acbea55dbda14424f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:49:12.505661 1790192 certs.go:381] copying /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.crt.754e3ec9 -> /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.crt
	I0127 12:49:12.505776 1790192 certs.go:385] copying /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.key.754e3ec9 -> /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.key
	I0127 12:49:12.505863 1790192 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/proxy-client.key
	I0127 12:49:12.505887 1790192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/proxy-client.crt with IP's: []
	I0127 12:49:12.609829 1790192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/proxy-client.crt ...
	I0127 12:49:12.609856 1790192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/proxy-client.crt: {Name:mk6cb77c1a7b511e7130b2dd7423c6ba9c6d37ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:49:12.610644 1790192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/proxy-client.key ...
	I0127 12:49:12.610664 1790192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/proxy-client.key: {Name:mkd90fcc60d00c9236b383668f8a16c0de9554e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:49:12.614971 1790192 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/1731396.pem (1338 bytes)
	W0127 12:49:12.615016 1790192 certs.go:480] ignoring /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/1731396_empty.pem, impossibly tiny 0 bytes
	I0127 12:49:12.615026 1790192 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 12:49:12.615065 1790192 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem (1078 bytes)
	I0127 12:49:12.615119 1790192 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem (1123 bytes)
	I0127 12:49:12.615159 1790192 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/key.pem (1675 bytes)
	I0127 12:49:12.615202 1790192 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem (1708 bytes)
	I0127 12:49:12.615902 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 12:49:12.642386 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 12:49:12.667109 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 12:49:12.688637 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 12:49:12.711307 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0127 12:49:12.732852 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 12:49:12.756599 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 12:49:12.812442 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 12:49:12.836060 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/1731396.pem --> /usr/share/ca-certificates/1731396.pem (1338 bytes)
	I0127 12:49:12.857115 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem --> /usr/share/ca-certificates/17313962.pem (1708 bytes)
	I0127 12:49:12.879108 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 12:49:12.900872 1790192 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 12:49:12.917407 1790192 ssh_runner.go:195] Run: openssl version
	I0127 12:49:12.922608 1790192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1731396.pem && ln -fs /usr/share/ca-certificates/1731396.pem /etc/ssl/certs/1731396.pem"
	I0127 12:49:12.933376 1790192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1731396.pem
	I0127 12:49:12.937409 1790192 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 11:32 /usr/share/ca-certificates/1731396.pem
	I0127 12:49:12.937451 1790192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1731396.pem
	I0127 12:49:12.942881 1790192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1731396.pem /etc/ssl/certs/51391683.0"
	I0127 12:49:12.953628 1790192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17313962.pem && ln -fs /usr/share/ca-certificates/17313962.pem /etc/ssl/certs/17313962.pem"
	I0127 12:49:12.964554 1790192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17313962.pem
	I0127 12:49:12.968534 1790192 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 11:32 /usr/share/ca-certificates/17313962.pem
	I0127 12:49:12.968581 1790192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17313962.pem
	I0127 12:49:12.973893 1790192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17313962.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 12:49:12.984546 1790192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 12:49:12.994913 1790192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:49:12.998791 1790192 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 11:24 /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:49:12.998841 1790192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:49:13.003870 1790192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
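
Each of the three cert-installation passes above follows the same pattern: copy the PEM into /usr/share/ca-certificates, ask openssl for its subject hash, and symlink <hash>.0 in /etc/ssl/certs so OpenSSL-based clients can find it. A small Go sketch of the hash-and-link step, shelling out to openssl just as the logged commands do (illustrative, assumes openssl is on PATH and the process can write to the cert directory):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash mirrors the `openssl x509 -hash -noout` + `ln -fs` steps:
// compute the certificate's subject hash and create the <hash>.0 symlink.
func linkCertByHash(certPath, certDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("openssl x509 -hash: %w", err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certDir, hash+".0")
	_ = os.Remove(link) // emulate `ln -fs`: replace an existing link if present
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
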
	I0127 12:49:13.013262 1790192 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 12:49:13.016784 1790192 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 12:49:13.016833 1790192 kubeadm.go:392] StartCluster: {Name:bridge-956477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-956477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.28 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:49:13.016911 1790192 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 12:49:13.016987 1790192 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 12:49:13.050812 1790192 cri.go:89] found id: ""
	I0127 12:49:13.050889 1790192 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 12:49:13.059865 1790192 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 12:49:13.068783 1790192 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 12:49:13.077676 1790192 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 12:49:13.077698 1790192 kubeadm.go:157] found existing configuration files:
	
	I0127 12:49:13.077743 1790192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 12:49:13.086826 1790192 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 12:49:13.086886 1790192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 12:49:13.096763 1790192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 12:49:13.106090 1790192 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 12:49:13.106152 1790192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 12:49:13.115056 1790192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 12:49:13.123311 1790192 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 12:49:13.123381 1790192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 12:49:13.134697 1790192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 12:49:13.145287 1790192 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 12:49:13.145360 1790192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 12:49:13.156930 1790192 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 12:49:13.215215 1790192 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 12:49:13.215384 1790192 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 12:49:13.321518 1790192 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 12:49:13.321678 1790192 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 12:49:13.321803 1790192 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 12:49:13.332363 1790192 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 12:49:13.473799 1790192 out.go:235]   - Generating certificates and keys ...
	I0127 12:49:13.473979 1790192 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 12:49:13.474081 1790192 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 12:49:13.685866 1790192 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 12:49:13.770778 1790192 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 12:49:14.148126 1790192 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 12:49:14.239549 1790192 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 12:49:14.286201 1790192 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 12:49:14.286341 1790192 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-956477 localhost] and IPs [192.168.72.28 127.0.0.1 ::1]
	I0127 12:49:14.383724 1790192 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 12:49:14.383950 1790192 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-956477 localhost] and IPs [192.168.72.28 127.0.0.1 ::1]
	I0127 12:49:14.501996 1790192 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 12:49:14.665536 1790192 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 12:49:14.804446 1790192 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 12:49:14.804529 1790192 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 12:49:14.897657 1790192 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 12:49:14.966489 1790192 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 12:49:15.104336 1790192 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 12:49:15.164491 1790192 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 12:49:15.350906 1790192 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 12:49:15.351563 1790192 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 12:49:15.354014 1790192 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 12:49:15.355551 1790192 out.go:235]   - Booting up control plane ...
	I0127 12:49:15.355691 1790192 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 12:49:15.355786 1790192 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 12:49:15.356057 1790192 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 12:49:15.370685 1790192 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 12:49:15.376916 1790192 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 12:49:15.377006 1790192 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 12:49:15.515590 1790192 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 12:49:15.515750 1790192 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 12:49:16.516381 1790192 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001998745s
	I0127 12:49:16.516512 1790192 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 12:49:21.514222 1790192 kubeadm.go:310] [api-check] The API server is healthy after 5.001594227s
	I0127 12:49:21.532591 1790192 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 12:49:21.554627 1790192 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 12:49:21.596778 1790192 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 12:49:21.597017 1790192 kubeadm.go:310] [mark-control-plane] Marking the node bridge-956477 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 12:49:21.613382 1790192 kubeadm.go:310] [bootstrap-token] Using token: y217q3.atj9ddkanm9dqcqt
	I0127 12:49:21.614522 1790192 out.go:235]   - Configuring RBAC rules ...
	I0127 12:49:21.614665 1790192 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 12:49:21.626049 1790192 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 12:49:21.635045 1790192 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 12:49:21.642711 1790192 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 12:49:21.646716 1790192 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 12:49:21.650577 1790192 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 12:49:21.921382 1790192 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 12:49:22.339910 1790192 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 12:49:22.920294 1790192 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 12:49:22.921302 1790192 kubeadm.go:310] 
	I0127 12:49:22.921394 1790192 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 12:49:22.921411 1790192 kubeadm.go:310] 
	I0127 12:49:22.921499 1790192 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 12:49:22.921508 1790192 kubeadm.go:310] 
	I0127 12:49:22.921542 1790192 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 12:49:22.921642 1790192 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 12:49:22.921726 1790192 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 12:49:22.921741 1790192 kubeadm.go:310] 
	I0127 12:49:22.921806 1790192 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 12:49:22.921817 1790192 kubeadm.go:310] 
	I0127 12:49:22.921886 1790192 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 12:49:22.921897 1790192 kubeadm.go:310] 
	I0127 12:49:22.921961 1790192 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 12:49:22.922086 1790192 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 12:49:22.922181 1790192 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 12:49:22.922191 1790192 kubeadm.go:310] 
	I0127 12:49:22.922311 1790192 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 12:49:22.922407 1790192 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 12:49:22.922421 1790192 kubeadm.go:310] 
	I0127 12:49:22.922529 1790192 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token y217q3.atj9ddkanm9dqcqt \
	I0127 12:49:22.922664 1790192 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b321ef7b3ab14f6d2ff43d6919f4b8b8c82a4707be0e2d90af4f9d1ba84cbc1f \
	I0127 12:49:22.922701 1790192 kubeadm.go:310] 	--control-plane 
	I0127 12:49:22.922707 1790192 kubeadm.go:310] 
	I0127 12:49:22.922801 1790192 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 12:49:22.922809 1790192 kubeadm.go:310] 
	I0127 12:49:22.922871 1790192 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token y217q3.atj9ddkanm9dqcqt \
	I0127 12:49:22.922996 1790192 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b321ef7b3ab14f6d2ff43d6919f4b8b8c82a4707be0e2d90af4f9d1ba84cbc1f 
	I0127 12:49:22.923821 1790192 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
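	The join commands printed above carry a --discovery-token-ca-cert-hash value. As a rough illustration only (this is not part of the test run and not minikube's code), that value can be recomputed from the cluster CA: kubeadm's hash is the SHA-256 of the DER-encoded Subject Public Key Info of the CA certificate. A minimal Go sketch, assuming the default kubeadm path /etc/kubernetes/pki/ca.crt:

	// hash_sketch.go: editorial sketch, not minikube or kubeadm source.
	// Recomputes the discovery-token-ca-cert-hash from the cluster CA certificate.
	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Default kubeadm CA path; adjust if the cluster uses a different PKI directory.
		data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA public key.
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
	}

	On the same cluster CA, the printed value should match the sha256:b321ef7b... hash shown in the join command above.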
	I0127 12:49:22.924014 1790192 cni.go:84] Creating CNI manager for "bridge"
	I0127 12:49:22.926262 1790192 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 12:49:22.927449 1790192 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 12:49:22.937784 1790192 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 12:49:22.955872 1790192 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 12:49:22.955954 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:49:22.956000 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-956477 minikube.k8s.io/updated_at=2025_01_27T12_49_22_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650 minikube.k8s.io/name=bridge-956477 minikube.k8s.io/primary=true
	I0127 12:49:22.984921 1790192 ops.go:34] apiserver oom_adj: -16
	I0127 12:49:23.101816 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:49:23.602076 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:49:24.102582 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:49:24.601942 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:49:25.102360 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:49:25.602350 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:49:26.102161 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:49:26.602794 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:49:27.102526 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:49:27.237160 1790192 kubeadm.go:1113] duration metric: took 4.281277151s to wait for elevateKubeSystemPrivileges
	I0127 12:49:27.237200 1790192 kubeadm.go:394] duration metric: took 14.220369926s to StartCluster
	I0127 12:49:27.237228 1790192 settings.go:142] acquiring lock: {Name:mk5612abdbdf8001cdf3481ad7c8001a04d496dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:49:27.237320 1790192 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20318-1724227/kubeconfig
	I0127 12:49:27.238783 1790192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/kubeconfig: {Name:mk9a903cebca75da3c74ff68f708e0a4e05f999b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:49:27.239069 1790192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 12:49:27.239072 1790192 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.28 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 12:49:27.239175 1790192 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 12:49:27.239310 1790192 addons.go:69] Setting storage-provisioner=true in profile "bridge-956477"
	I0127 12:49:27.239320 1790192 config.go:182] Loaded profile config "bridge-956477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:49:27.239330 1790192 addons.go:238] Setting addon storage-provisioner=true in "bridge-956477"
	I0127 12:49:27.239333 1790192 addons.go:69] Setting default-storageclass=true in profile "bridge-956477"
	I0127 12:49:27.239365 1790192 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-956477"
	I0127 12:49:27.239371 1790192 host.go:66] Checking if "bridge-956477" exists ...
	I0127 12:49:27.239830 1790192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:49:27.239873 1790192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:49:27.239917 1790192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:49:27.239957 1790192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:49:27.240680 1790192 out.go:177] * Verifying Kubernetes components...
	I0127 12:49:27.241931 1790192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:49:27.261385 1790192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34619
	I0127 12:49:27.261452 1790192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41767
	I0127 12:49:27.261810 1790192 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:49:27.262003 1790192 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:49:27.262389 1790192 main.go:141] libmachine: Using API Version  1
	I0127 12:49:27.262417 1790192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:49:27.262543 1790192 main.go:141] libmachine: Using API Version  1
	I0127 12:49:27.262563 1790192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:49:27.262767 1790192 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:49:27.262952 1790192 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:49:27.262989 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetState
	I0127 12:49:27.263506 1790192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:49:27.263537 1790192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:49:27.266688 1790192 addons.go:238] Setting addon default-storageclass=true in "bridge-956477"
	I0127 12:49:27.266732 1790192 host.go:66] Checking if "bridge-956477" exists ...
	I0127 12:49:27.267120 1790192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:49:27.267168 1790192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:49:27.278963 1790192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35581
	I0127 12:49:27.279421 1790192 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:49:27.279976 1790192 main.go:141] libmachine: Using API Version  1
	I0127 12:49:27.279999 1790192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:49:27.280431 1790192 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:49:27.280692 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetState
	I0127 12:49:27.282702 1790192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36023
	I0127 12:49:27.282845 1790192 main.go:141] libmachine: (bridge-956477) Calling .DriverName
	I0127 12:49:27.283179 1790192 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:49:27.283627 1790192 main.go:141] libmachine: Using API Version  1
	I0127 12:49:27.283649 1790192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:49:27.283978 1790192 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:49:27.284748 1790192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:49:27.284785 1790192 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:49:27.284797 1790192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:49:27.285956 1790192 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:49:27.285977 1790192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 12:49:27.286001 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:27.288697 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:27.289087 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:27.289110 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:27.289304 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:27.289459 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:27.289574 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:27.289669 1790192 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/id_rsa Username:docker}
	I0127 12:49:27.301672 1790192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33341
	I0127 12:49:27.302317 1790192 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:49:27.302925 1790192 main.go:141] libmachine: Using API Version  1
	I0127 12:49:27.302949 1790192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:49:27.303263 1790192 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:49:27.303488 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetState
	I0127 12:49:27.305258 1790192 main.go:141] libmachine: (bridge-956477) Calling .DriverName
	I0127 12:49:27.305479 1790192 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 12:49:27.305497 1790192 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 12:49:27.305517 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:27.308750 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:27.309243 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:27.309269 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:27.309409 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:27.309585 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:27.309726 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:27.309875 1790192 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/id_rsa Username:docker}
	I0127 12:49:27.500640 1790192 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:49:27.500778 1790192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0127 12:49:27.538353 1790192 node_ready.go:35] waiting up to 15m0s for node "bridge-956477" to be "Ready" ...
	I0127 12:49:27.548400 1790192 node_ready.go:49] node "bridge-956477" has status "Ready":"True"
	I0127 12:49:27.548443 1790192 node_ready.go:38] duration metric: took 10.053639ms for node "bridge-956477" to be "Ready" ...
	I0127 12:49:27.548459 1790192 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:49:27.564271 1790192 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-c87bh" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:27.632137 1790192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:49:27.647091 1790192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 12:49:28.184542 1790192 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0127 12:49:28.549638 1790192 main.go:141] libmachine: Making call to close driver server
	I0127 12:49:28.549663 1790192 main.go:141] libmachine: (bridge-956477) Calling .Close
	I0127 12:49:28.550103 1790192 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:49:28.550127 1790192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:49:28.550137 1790192 main.go:141] libmachine: Making call to close driver server
	I0127 12:49:28.550144 1790192 main.go:141] libmachine: (bridge-956477) Calling .Close
	I0127 12:49:28.550198 1790192 main.go:141] libmachine: (bridge-956477) DBG | Closing plugin on server side
	I0127 12:49:28.550409 1790192 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:49:28.550429 1790192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:49:28.550443 1790192 main.go:141] libmachine: (bridge-956477) DBG | Closing plugin on server side
	I0127 12:49:28.550800 1790192 main.go:141] libmachine: Making call to close driver server
	I0127 12:49:28.550816 1790192 main.go:141] libmachine: (bridge-956477) Calling .Close
	I0127 12:49:28.551057 1790192 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:49:28.551076 1790192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:49:28.551081 1790192 main.go:141] libmachine: (bridge-956477) DBG | Closing plugin on server side
	I0127 12:49:28.551085 1790192 main.go:141] libmachine: Making call to close driver server
	I0127 12:49:28.551098 1790192 main.go:141] libmachine: (bridge-956477) Calling .Close
	I0127 12:49:28.551316 1790192 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:49:28.551331 1790192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:49:28.575614 1790192 main.go:141] libmachine: Making call to close driver server
	I0127 12:49:28.575665 1790192 main.go:141] libmachine: (bridge-956477) Calling .Close
	I0127 12:49:28.575924 1790192 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:49:28.575979 1790192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:49:28.575978 1790192 main.go:141] libmachine: (bridge-956477) DBG | Closing plugin on server side
	I0127 12:49:28.577474 1790192 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0127 12:49:28.578591 1790192 addons.go:514] duration metric: took 1.33943345s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0127 12:49:28.695806 1790192 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-956477" context rescaled to 1 replicas
	I0127 12:49:29.570116 1790192 pod_ready.go:103] pod "coredns-668d6bf9bc-c87bh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:49:31.570640 1790192 pod_ready.go:103] pod "coredns-668d6bf9bc-c87bh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:49:33.572383 1790192 pod_ready.go:103] pod "coredns-668d6bf9bc-c87bh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:49:34.570677 1790192 pod_ready.go:98] pod "coredns-668d6bf9bc-c87bh" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 12:49:34 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 12:49:27 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 12:49:27 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 12:49:27 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 12:49:27 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.72.28 HostIPs:[{IP:192.168.72.
28}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-01-27 12:49:27 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-01-27 12:49:28 +0000 UTC,FinishedAt:2025-01-27 12:49:34 +0000 UTC,ContainerID:cri-o://f63238121640928054da5b75a8267e8c3b0ecb455be8db829959a17b2f86f494,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://f63238121640928054da5b75a8267e8c3b0ecb455be8db829959a17b2f86f494 Started:0xc0023f14c0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0021ef1e0} {Name:kube-api-access-j5rfl MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0021ef1f0}] User:nil
AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0127 12:49:34.570712 1790192 pod_ready.go:82] duration metric: took 7.006412478s for pod "coredns-668d6bf9bc-c87bh" in "kube-system" namespace to be "Ready" ...
	E0127 12:49:34.570726 1790192 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-668d6bf9bc-c87bh" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 12:49:34 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 12:49:27 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 12:49:27 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 12:49:27 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 12:49:27 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.7
2.28 HostIPs:[{IP:192.168.72.28}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-01-27 12:49:27 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-01-27 12:49:28 +0000 UTC,FinishedAt:2025-01-27 12:49:34 +0000 UTC,ContainerID:cri-o://f63238121640928054da5b75a8267e8c3b0ecb455be8db829959a17b2f86f494,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://f63238121640928054da5b75a8267e8c3b0ecb455be8db829959a17b2f86f494 Started:0xc0023f14c0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0021ef1e0} {Name:kube-api-access-j5rfl MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveRead
Only:0xc0021ef1f0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0127 12:49:34.570736 1790192 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-q9r6j" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:34.575210 1790192 pod_ready.go:93] pod "coredns-668d6bf9bc-q9r6j" in "kube-system" namespace has status "Ready":"True"
	I0127 12:49:34.575232 1790192 pod_ready.go:82] duration metric: took 4.46563ms for pod "coredns-668d6bf9bc-q9r6j" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:34.575241 1790192 pod_ready.go:79] waiting up to 15m0s for pod "etcd-bridge-956477" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:36.082910 1790192 pod_ready.go:93] pod "etcd-bridge-956477" in "kube-system" namespace has status "Ready":"True"
	I0127 12:49:36.082952 1790192 pod_ready.go:82] duration metric: took 1.507702821s for pod "etcd-bridge-956477" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:36.082968 1790192 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-bridge-956477" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:36.086925 1790192 pod_ready.go:93] pod "kube-apiserver-bridge-956477" in "kube-system" namespace has status "Ready":"True"
	I0127 12:49:36.086953 1790192 pod_ready.go:82] duration metric: took 3.975819ms for pod "kube-apiserver-bridge-956477" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:36.086969 1790192 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-bridge-956477" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:36.091952 1790192 pod_ready.go:93] pod "kube-controller-manager-bridge-956477" in "kube-system" namespace has status "Ready":"True"
	I0127 12:49:36.091969 1790192 pod_ready.go:82] duration metric: took 4.993389ms for pod "kube-controller-manager-bridge-956477" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:36.091978 1790192 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-8fw2n" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:36.170654 1790192 pod_ready.go:93] pod "kube-proxy-8fw2n" in "kube-system" namespace has status "Ready":"True"
	I0127 12:49:36.170678 1790192 pod_ready.go:82] duration metric: took 78.694605ms for pod "kube-proxy-8fw2n" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:36.170688 1790192 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-bridge-956477" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:36.568993 1790192 pod_ready.go:93] pod "kube-scheduler-bridge-956477" in "kube-system" namespace has status "Ready":"True"
	I0127 12:49:36.569019 1790192 pod_ready.go:82] duration metric: took 398.324568ms for pod "kube-scheduler-bridge-956477" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:36.569029 1790192 pod_ready.go:39] duration metric: took 9.020555356s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
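	The waits logged above (node "Ready", then each system-critical pod "Ready") amount to polling the Kubernetes API for status conditions. A minimal sketch of such a check with client-go follows; it assumes a kubeconfig at the default location and reuses the node name from this run, and it is an illustration rather than minikube's actual node_ready/pod_ready implementation:

	// readiness_sketch.go: editorial sketch, not minikube source.
	// Fetches a node and reports its Ready condition.
	package main

	import (
		"context"
		"fmt"
		"path/filepath"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)

	func main() {
		// Assumes ~/.kube/config points at the cluster under test.
		kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// "bridge-956477" is the node name from the log above.
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "bridge-956477", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("node Ready condition: %s\n", c.Status)
			}
		}
	}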
	I0127 12:49:36.569047 1790192 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:49:36.569110 1790192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:49:36.585221 1790192 api_server.go:72] duration metric: took 9.346111182s to wait for apiserver process to appear ...
	I0127 12:49:36.585260 1790192 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:49:36.585284 1790192 api_server.go:253] Checking apiserver healthz at https://192.168.72.28:8443/healthz ...
	I0127 12:49:36.592716 1790192 api_server.go:279] https://192.168.72.28:8443/healthz returned 200:
	ok
	I0127 12:49:36.594292 1790192 api_server.go:141] control plane version: v1.32.1
	I0127 12:49:36.594316 1790192 api_server.go:131] duration metric: took 9.04907ms to wait for apiserver health ...
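	The healthz wait above is a plain HTTPS GET against the apiserver's /healthz endpoint, expecting the body "ok". A minimal sketch (not minikube's implementation), using the endpoint logged for this run; TLS verification is skipped here purely for brevity, and a real check should trust the cluster CA instead:

	// healthz_sketch.go: editorial sketch, not minikube source.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{
			// Insecure only for this illustration; prefer configuring the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		// 192.168.72.28:8443 is the apiserver endpoint shown in the log above.
		resp, err := client.Get("https://192.168.72.28:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
	}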
	I0127 12:49:36.594325 1790192 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:49:36.771302 1790192 system_pods.go:59] 7 kube-system pods found
	I0127 12:49:36.771341 1790192 system_pods.go:61] "coredns-668d6bf9bc-q9r6j" [999c9062-2e0b-476e-8cf2-f462a0280779] Running
	I0127 12:49:36.771347 1790192 system_pods.go:61] "etcd-bridge-956477" [d82e5e0c-3cd1-48bb-9d1f-574dbca5e0cc] Running
	I0127 12:49:36.771353 1790192 system_pods.go:61] "kube-apiserver-bridge-956477" [8cbb1927-3e41-4894-b646-a02b07cfc4da] Running
	I0127 12:49:36.771358 1790192 system_pods.go:61] "kube-controller-manager-bridge-956477" [1214913d-b397-4e00-9d3f-927a4e471293] Running
	I0127 12:49:36.771363 1790192 system_pods.go:61] "kube-proxy-8fw2n" [00316310-fd3c-4bb3-91e1-0e309ea0cade] Running
	I0127 12:49:36.771368 1790192 system_pods.go:61] "kube-scheduler-bridge-956477" [5f90f0d7-62a7-49d0-b28a-cef4e5713bc4] Running
	I0127 12:49:36.771372 1790192 system_pods.go:61] "storage-provisioner" [417b172b-04aa-4f1a-8439-e4b76228f1ca] Running
	I0127 12:49:36.771382 1790192 system_pods.go:74] duration metric: took 177.049643ms to wait for pod list to return data ...
	I0127 12:49:36.771394 1790192 default_sa.go:34] waiting for default service account to be created ...
	I0127 12:49:36.969860 1790192 default_sa.go:45] found service account: "default"
	I0127 12:49:36.969891 1790192 default_sa.go:55] duration metric: took 198.486144ms for default service account to be created ...
	I0127 12:49:36.969903 1790192 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 12:49:37.173813 1790192 system_pods.go:87] 7 kube-system pods found
	I0127 12:49:37.370364 1790192 system_pods.go:105] "coredns-668d6bf9bc-q9r6j" [999c9062-2e0b-476e-8cf2-f462a0280779] Running
	I0127 12:49:37.370390 1790192 system_pods.go:105] "etcd-bridge-956477" [d82e5e0c-3cd1-48bb-9d1f-574dbca5e0cc] Running
	I0127 12:49:37.370396 1790192 system_pods.go:105] "kube-apiserver-bridge-956477" [8cbb1927-3e41-4894-b646-a02b07cfc4da] Running
	I0127 12:49:37.370401 1790192 system_pods.go:105] "kube-controller-manager-bridge-956477" [1214913d-b397-4e00-9d3f-927a4e471293] Running
	I0127 12:49:37.370407 1790192 system_pods.go:105] "kube-proxy-8fw2n" [00316310-fd3c-4bb3-91e1-0e309ea0cade] Running
	I0127 12:49:37.370411 1790192 system_pods.go:105] "kube-scheduler-bridge-956477" [5f90f0d7-62a7-49d0-b28a-cef4e5713bc4] Running
	I0127 12:49:37.370415 1790192 system_pods.go:105] "storage-provisioner" [417b172b-04aa-4f1a-8439-e4b76228f1ca] Running
	I0127 12:49:37.370423 1790192 system_pods.go:147] duration metric: took 400.513222ms to wait for k8s-apps to be running ...
	I0127 12:49:37.370430 1790192 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 12:49:37.370476 1790192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:49:37.386578 1790192 system_svc.go:56] duration metric: took 16.134406ms WaitForService to wait for kubelet
	I0127 12:49:37.386609 1790192 kubeadm.go:582] duration metric: took 10.147508217s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 12:49:37.386628 1790192 node_conditions.go:102] verifying NodePressure condition ...
	I0127 12:49:37.570387 1790192 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 12:49:37.570420 1790192 node_conditions.go:123] node cpu capacity is 2
	I0127 12:49:37.570439 1790192 node_conditions.go:105] duration metric: took 183.805809ms to run NodePressure ...
	I0127 12:49:37.570455 1790192 start.go:241] waiting for startup goroutines ...
	I0127 12:49:37.570466 1790192 start.go:246] waiting for cluster config update ...
	I0127 12:49:37.570478 1790192 start.go:255] writing updated cluster config ...
	I0127 12:49:37.570833 1790192 ssh_runner.go:195] Run: rm -f paused
	I0127 12:49:37.621383 1790192 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 12:49:37.623996 1790192 out.go:177] * Done! kubectl is now configured to use "bridge-956477" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 27 12:58:40 no-preload-472479 crio[729]: time="2025-01-27 12:58:40.646410349Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737982720646378369,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a2abf5b7-9308-4d33-a9d4-8d821eb8963b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:58:40 no-preload-472479 crio[729]: time="2025-01-27 12:58:40.647336772Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=89071221-0005-41c7-afec-293d091a2033 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:58:40 no-preload-472479 crio[729]: time="2025-01-27 12:58:40.647425076Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=89071221-0005-41c7-afec-293d091a2033 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:58:40 no-preload-472479 crio[729]: time="2025-01-27 12:58:40.647864622Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0bea34199cbbadb247c932c17bb99ba4a629d0561ef9b60695ee1a8a6e25cf0f,PodSandboxId:5133fe3ac577aff7a86613ac98fb7ac0f8534bb9f2d614f859ca5cb4a0782d8e,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737982431691684654,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-cgv7x,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 3f52f406-9235-4bc3-86e2-46436d7d5fae,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.c
ontainer.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e43a60877ae821e20ac4fe8fe0ba85e6a3e3f7a4d83f1932348b6e05e91f939e,PodSandboxId:824c73c6b9c005b904668a4830d11b2662a0d4a38ce3b4bdde83c75605a87d0f,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737981469812255884,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-wpmmw,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kuberne
tes.pod.uid: 3db6fada-ced3-4afa-a959-301b355cd64b,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73610d1234ca3435323d37a384c78ec594ef79a665d353d70eea9d0cf7c191c3,PodSandboxId:e78639386211b0b39403e4079b278dcc4b6791dec903ecb99bd64eeab0843bef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737981457046055206,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: stor
age-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e068a21-4866-4db6-a5cf-736f52620cd1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9327a12617e4b3540c1f9b26c01d617308d5b9afd5e3fbae1df01a56d989664e,PodSandboxId:780b65255a76e3d261e1826eadf18885a7d3fbc556d652e667eaafdf90462588,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737981456069774035,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-9plpt,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 7460a186-1436-4dcb-b065-73d918b87428,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa321fb842b90367df90d9189021142aa9ff4a0c8435d84f4a7336a7c1460f6d,PodSandboxId:ecfb7df30bb93f140d9718af6eadd41a7b42b74161a71509e87c022f556261ed,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc5675
91790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737981455975309258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-cttbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5feb255b-3898-42bd-9419-dce1a016e154,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d9e93cf9670fe8752d19b5a86bed7774e1f2bd468dc1410ca2de8dbf187209a,PodSandboxId:291c79ba96add4401cce72039273dca15ef9c78205829366700ced619b168824,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Ima
ge:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737981454988533542,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-777hh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 856e6e4e-7fab-4ba7-9236-2d98705e6431,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec0d63f98dedd7f1e539fdc114ad937eb10508b8ff68238e2b8c62c46ee8c851,PodSandboxId:7198b72e1baeec89644d5ca47b6dc30037578e81b5b7251865fabeb828131629,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113
e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737981444423960785,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-472479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb2ddda0c734b32a7b1bd165e1a761e9,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da7c9e66cd289c57842d9582c62ce038a7afb3d3cdec6d45774c8106516d72e7,PodSandboxId:cb2f5ebd30c1b50715166e9ef5ed05dec351779572fbaaae25fe7e0759e9ba20,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe555
6394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737981444379446752,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-472479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7608113c1a240a84f437776bce7a4e5,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a0e35b15427e66982e212309633d7d512497152c5625f7c0a4606193133e948,PodSandboxId:28dce7fe48151a7e718a320cf0bdf62b073c8226d11e0aff52e6df4f35abb86c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf95
9e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737981444388565836,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-472479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a8a159d0547198b85e9c046cc803b22,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6971eec3af1b0f4f887fbee3100901645ede2ab615d15076831747eecb084228,PodSandboxId:c6db1e93d5dc9af3f422e428cd758a6959e059eb4c489973059031d39f5025ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f
35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737981444343750875,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-472479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ae2802f49dac70492cf7e0f9886c181,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8d09d18cf8e33807df00f261b97638875851b35f131a920d6d5b0a25625b152,PodSandboxId:338062bcba011056e288e776dc2feb28edafed96143f1a10264755f4e455bc47,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737981161191350192,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-472479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7608113c1a240a84f437776bce7a4e5,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=89071221-0005-41c7-afec-293d091a2033 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:58:40 no-preload-472479 crio[729]: time="2025-01-27 12:58:40.682020619Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1b068b99-1b0c-454f-8a41-823d97fca04a name=/runtime.v1.RuntimeService/Version
	Jan 27 12:58:40 no-preload-472479 crio[729]: time="2025-01-27 12:58:40.682100575Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1b068b99-1b0c-454f-8a41-823d97fca04a name=/runtime.v1.RuntimeService/Version
	Jan 27 12:58:40 no-preload-472479 crio[729]: time="2025-01-27 12:58:40.682949788Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b023afd8-a3a6-4ed0-85cd-f6f70926b6e8 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:58:40 no-preload-472479 crio[729]: time="2025-01-27 12:58:40.683337606Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737982720683317684,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b023afd8-a3a6-4ed0-85cd-f6f70926b6e8 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:58:40 no-preload-472479 crio[729]: time="2025-01-27 12:58:40.684380391Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6b81e6f3-8829-4cbe-a753-b7c5d6409c42 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:58:40 no-preload-472479 crio[729]: time="2025-01-27 12:58:40.684509130Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6b81e6f3-8829-4cbe-a753-b7c5d6409c42 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:58:40 no-preload-472479 crio[729]: time="2025-01-27 12:58:40.684832361Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0bea34199cbbadb247c932c17bb99ba4a629d0561ef9b60695ee1a8a6e25cf0f,PodSandboxId:5133fe3ac577aff7a86613ac98fb7ac0f8534bb9f2d614f859ca5cb4a0782d8e,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737982431691684654,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-cgv7x,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 3f52f406-9235-4bc3-86e2-46436d7d5fae,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.c
ontainer.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e43a60877ae821e20ac4fe8fe0ba85e6a3e3f7a4d83f1932348b6e05e91f939e,PodSandboxId:824c73c6b9c005b904668a4830d11b2662a0d4a38ce3b4bdde83c75605a87d0f,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737981469812255884,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-wpmmw,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kuberne
tes.pod.uid: 3db6fada-ced3-4afa-a959-301b355cd64b,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73610d1234ca3435323d37a384c78ec594ef79a665d353d70eea9d0cf7c191c3,PodSandboxId:e78639386211b0b39403e4079b278dcc4b6791dec903ecb99bd64eeab0843bef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737981457046055206,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: stor
age-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e068a21-4866-4db6-a5cf-736f52620cd1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9327a12617e4b3540c1f9b26c01d617308d5b9afd5e3fbae1df01a56d989664e,PodSandboxId:780b65255a76e3d261e1826eadf18885a7d3fbc556d652e667eaafdf90462588,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737981456069774035,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-9plpt,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 7460a186-1436-4dcb-b065-73d918b87428,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa321fb842b90367df90d9189021142aa9ff4a0c8435d84f4a7336a7c1460f6d,PodSandboxId:ecfb7df30bb93f140d9718af6eadd41a7b42b74161a71509e87c022f556261ed,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc5675
91790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737981455975309258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-cttbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5feb255b-3898-42bd-9419-dce1a016e154,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d9e93cf9670fe8752d19b5a86bed7774e1f2bd468dc1410ca2de8dbf187209a,PodSandboxId:291c79ba96add4401cce72039273dca15ef9c78205829366700ced619b168824,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Ima
ge:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737981454988533542,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-777hh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 856e6e4e-7fab-4ba7-9236-2d98705e6431,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec0d63f98dedd7f1e539fdc114ad937eb10508b8ff68238e2b8c62c46ee8c851,PodSandboxId:7198b72e1baeec89644d5ca47b6dc30037578e81b5b7251865fabeb828131629,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113
e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737981444423960785,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-472479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb2ddda0c734b32a7b1bd165e1a761e9,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da7c9e66cd289c57842d9582c62ce038a7afb3d3cdec6d45774c8106516d72e7,PodSandboxId:cb2f5ebd30c1b50715166e9ef5ed05dec351779572fbaaae25fe7e0759e9ba20,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe555
6394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737981444379446752,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-472479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7608113c1a240a84f437776bce7a4e5,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a0e35b15427e66982e212309633d7d512497152c5625f7c0a4606193133e948,PodSandboxId:28dce7fe48151a7e718a320cf0bdf62b073c8226d11e0aff52e6df4f35abb86c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf95
9e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737981444388565836,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-472479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a8a159d0547198b85e9c046cc803b22,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6971eec3af1b0f4f887fbee3100901645ede2ab615d15076831747eecb084228,PodSandboxId:c6db1e93d5dc9af3f422e428cd758a6959e059eb4c489973059031d39f5025ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f
35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737981444343750875,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-472479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ae2802f49dac70492cf7e0f9886c181,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8d09d18cf8e33807df00f261b97638875851b35f131a920d6d5b0a25625b152,PodSandboxId:338062bcba011056e288e776dc2feb28edafed96143f1a10264755f4e455bc47,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737981161191350192,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-472479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7608113c1a240a84f437776bce7a4e5,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6b81e6f3-8829-4cbe-a753-b7c5d6409c42 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:58:40 no-preload-472479 crio[729]: time="2025-01-27 12:58:40.716172205Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8ca42e93-6b1f-4c63-9f94-1b19df78975d name=/runtime.v1.RuntimeService/Version
	Jan 27 12:58:40 no-preload-472479 crio[729]: time="2025-01-27 12:58:40.716254887Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8ca42e93-6b1f-4c63-9f94-1b19df78975d name=/runtime.v1.RuntimeService/Version
	Jan 27 12:58:40 no-preload-472479 crio[729]: time="2025-01-27 12:58:40.717539981Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8d5c3b62-2479-4355-8ee8-805a50520fe6 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:58:40 no-preload-472479 crio[729]: time="2025-01-27 12:58:40.717961935Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737982720717941540,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8d5c3b62-2479-4355-8ee8-805a50520fe6 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:58:40 no-preload-472479 crio[729]: time="2025-01-27 12:58:40.718816161Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c69b37ed-48df-4607-bfda-ec87fe0d4eb1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:58:40 no-preload-472479 crio[729]: time="2025-01-27 12:58:40.718901420Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c69b37ed-48df-4607-bfda-ec87fe0d4eb1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:58:40 no-preload-472479 crio[729]: time="2025-01-27 12:58:40.723136735Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0bea34199cbbadb247c932c17bb99ba4a629d0561ef9b60695ee1a8a6e25cf0f,PodSandboxId:5133fe3ac577aff7a86613ac98fb7ac0f8534bb9f2d614f859ca5cb4a0782d8e,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737982431691684654,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-cgv7x,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 3f52f406-9235-4bc3-86e2-46436d7d5fae,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.c
ontainer.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e43a60877ae821e20ac4fe8fe0ba85e6a3e3f7a4d83f1932348b6e05e91f939e,PodSandboxId:824c73c6b9c005b904668a4830d11b2662a0d4a38ce3b4bdde83c75605a87d0f,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737981469812255884,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-wpmmw,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kuberne
tes.pod.uid: 3db6fada-ced3-4afa-a959-301b355cd64b,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73610d1234ca3435323d37a384c78ec594ef79a665d353d70eea9d0cf7c191c3,PodSandboxId:e78639386211b0b39403e4079b278dcc4b6791dec903ecb99bd64eeab0843bef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737981457046055206,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: stor
age-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e068a21-4866-4db6-a5cf-736f52620cd1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9327a12617e4b3540c1f9b26c01d617308d5b9afd5e3fbae1df01a56d989664e,PodSandboxId:780b65255a76e3d261e1826eadf18885a7d3fbc556d652e667eaafdf90462588,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737981456069774035,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-9plpt,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 7460a186-1436-4dcb-b065-73d918b87428,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa321fb842b90367df90d9189021142aa9ff4a0c8435d84f4a7336a7c1460f6d,PodSandboxId:ecfb7df30bb93f140d9718af6eadd41a7b42b74161a71509e87c022f556261ed,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc5675
91790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737981455975309258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-cttbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5feb255b-3898-42bd-9419-dce1a016e154,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d9e93cf9670fe8752d19b5a86bed7774e1f2bd468dc1410ca2de8dbf187209a,PodSandboxId:291c79ba96add4401cce72039273dca15ef9c78205829366700ced619b168824,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Ima
ge:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737981454988533542,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-777hh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 856e6e4e-7fab-4ba7-9236-2d98705e6431,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec0d63f98dedd7f1e539fdc114ad937eb10508b8ff68238e2b8c62c46ee8c851,PodSandboxId:7198b72e1baeec89644d5ca47b6dc30037578e81b5b7251865fabeb828131629,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113
e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737981444423960785,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-472479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb2ddda0c734b32a7b1bd165e1a761e9,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da7c9e66cd289c57842d9582c62ce038a7afb3d3cdec6d45774c8106516d72e7,PodSandboxId:cb2f5ebd30c1b50715166e9ef5ed05dec351779572fbaaae25fe7e0759e9ba20,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe555
6394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737981444379446752,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-472479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7608113c1a240a84f437776bce7a4e5,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a0e35b15427e66982e212309633d7d512497152c5625f7c0a4606193133e948,PodSandboxId:28dce7fe48151a7e718a320cf0bdf62b073c8226d11e0aff52e6df4f35abb86c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf95
9e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737981444388565836,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-472479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a8a159d0547198b85e9c046cc803b22,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6971eec3af1b0f4f887fbee3100901645ede2ab615d15076831747eecb084228,PodSandboxId:c6db1e93d5dc9af3f422e428cd758a6959e059eb4c489973059031d39f5025ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f
35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737981444343750875,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-472479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ae2802f49dac70492cf7e0f9886c181,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8d09d18cf8e33807df00f261b97638875851b35f131a920d6d5b0a25625b152,PodSandboxId:338062bcba011056e288e776dc2feb28edafed96143f1a10264755f4e455bc47,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737981161191350192,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-472479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7608113c1a240a84f437776bce7a4e5,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c69b37ed-48df-4607-bfda-ec87fe0d4eb1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:58:40 no-preload-472479 crio[729]: time="2025-01-27 12:58:40.762197494Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9a382cc0-c5bb-4394-af8f-9f1246da586f name=/runtime.v1.RuntimeService/Version
	Jan 27 12:58:40 no-preload-472479 crio[729]: time="2025-01-27 12:58:40.762281063Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9a382cc0-c5bb-4394-af8f-9f1246da586f name=/runtime.v1.RuntimeService/Version
	Jan 27 12:58:40 no-preload-472479 crio[729]: time="2025-01-27 12:58:40.763187380Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b3d8fd21-6ef0-47d4-b996-7085996be693 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:58:40 no-preload-472479 crio[729]: time="2025-01-27 12:58:40.763549836Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737982720763528960,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b3d8fd21-6ef0-47d4-b996-7085996be693 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:58:40 no-preload-472479 crio[729]: time="2025-01-27 12:58:40.764070368Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9c4a611e-9782-49f0-b6ed-9a4112c35533 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:58:40 no-preload-472479 crio[729]: time="2025-01-27 12:58:40.764133109Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9c4a611e-9782-49f0-b6ed-9a4112c35533 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:58:40 no-preload-472479 crio[729]: time="2025-01-27 12:58:40.764362309Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0bea34199cbbadb247c932c17bb99ba4a629d0561ef9b60695ee1a8a6e25cf0f,PodSandboxId:5133fe3ac577aff7a86613ac98fb7ac0f8534bb9f2d614f859ca5cb4a0782d8e,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737982431691684654,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-cgv7x,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 3f52f406-9235-4bc3-86e2-46436d7d5fae,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.c
ontainer.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e43a60877ae821e20ac4fe8fe0ba85e6a3e3f7a4d83f1932348b6e05e91f939e,PodSandboxId:824c73c6b9c005b904668a4830d11b2662a0d4a38ce3b4bdde83c75605a87d0f,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737981469812255884,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-wpmmw,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kuberne
tes.pod.uid: 3db6fada-ced3-4afa-a959-301b355cd64b,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73610d1234ca3435323d37a384c78ec594ef79a665d353d70eea9d0cf7c191c3,PodSandboxId:e78639386211b0b39403e4079b278dcc4b6791dec903ecb99bd64eeab0843bef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737981457046055206,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: stor
age-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e068a21-4866-4db6-a5cf-736f52620cd1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9327a12617e4b3540c1f9b26c01d617308d5b9afd5e3fbae1df01a56d989664e,PodSandboxId:780b65255a76e3d261e1826eadf18885a7d3fbc556d652e667eaafdf90462588,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737981456069774035,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-9plpt,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 7460a186-1436-4dcb-b065-73d918b87428,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa321fb842b90367df90d9189021142aa9ff4a0c8435d84f4a7336a7c1460f6d,PodSandboxId:ecfb7df30bb93f140d9718af6eadd41a7b42b74161a71509e87c022f556261ed,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc5675
91790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737981455975309258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-cttbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5feb255b-3898-42bd-9419-dce1a016e154,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d9e93cf9670fe8752d19b5a86bed7774e1f2bd468dc1410ca2de8dbf187209a,PodSandboxId:291c79ba96add4401cce72039273dca15ef9c78205829366700ced619b168824,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Ima
ge:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737981454988533542,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-777hh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 856e6e4e-7fab-4ba7-9236-2d98705e6431,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec0d63f98dedd7f1e539fdc114ad937eb10508b8ff68238e2b8c62c46ee8c851,PodSandboxId:7198b72e1baeec89644d5ca47b6dc30037578e81b5b7251865fabeb828131629,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113
e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737981444423960785,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-472479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb2ddda0c734b32a7b1bd165e1a761e9,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da7c9e66cd289c57842d9582c62ce038a7afb3d3cdec6d45774c8106516d72e7,PodSandboxId:cb2f5ebd30c1b50715166e9ef5ed05dec351779572fbaaae25fe7e0759e9ba20,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe555
6394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737981444379446752,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-472479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7608113c1a240a84f437776bce7a4e5,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a0e35b15427e66982e212309633d7d512497152c5625f7c0a4606193133e948,PodSandboxId:28dce7fe48151a7e718a320cf0bdf62b073c8226d11e0aff52e6df4f35abb86c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf95
9e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737981444388565836,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-472479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a8a159d0547198b85e9c046cc803b22,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6971eec3af1b0f4f887fbee3100901645ede2ab615d15076831747eecb084228,PodSandboxId:c6db1e93d5dc9af3f422e428cd758a6959e059eb4c489973059031d39f5025ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f
35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737981444343750875,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-472479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ae2802f49dac70492cf7e0f9886c181,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8d09d18cf8e33807df00f261b97638875851b35f131a920d6d5b0a25625b152,PodSandboxId:338062bcba011056e288e776dc2feb28edafed96143f1a10264755f4e455bc47,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737981161191350192,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-472479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7608113c1a240a84f437776bce7a4e5,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9c4a611e-9782-49f0-b6ed-9a4112c35533 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	0bea34199cbba       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           4 minutes ago       Exited              dashboard-metrics-scraper   8                   5133fe3ac577a       dashboard-metrics-scraper-86c6bf9756-cgv7x
	e43a60877ae82       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   20 minutes ago      Running             kubernetes-dashboard        0                   824c73c6b9c00       kubernetes-dashboard-7779f9b69b-wpmmw
	73610d1234ca3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 minutes ago      Running             storage-provisioner         0                   e78639386211b       storage-provisioner
	9327a12617e4b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           21 minutes ago      Running             coredns                     0                   780b65255a76e       coredns-668d6bf9bc-9plpt
	aa321fb842b90       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           21 minutes ago      Running             coredns                     0                   ecfb7df30bb93       coredns-668d6bf9bc-cttbf
	4d9e93cf9670f       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                           21 minutes ago      Running             kube-proxy                  0                   291c79ba96add       kube-proxy-777hh
	ec0d63f98dedd       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                           21 minutes ago      Running             kube-scheduler              2                   7198b72e1baee       kube-scheduler-no-preload-472479
	9a0e35b15427e       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                           21 minutes ago      Running             kube-controller-manager     2                   28dce7fe48151       kube-controller-manager-no-preload-472479
	da7c9e66cd289       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                           21 minutes ago      Running             kube-apiserver              2                   cb2f5ebd30c1b       kube-apiserver-no-preload-472479
	6971eec3af1b0       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                           21 minutes ago      Running             etcd                        2                   c6db1e93d5dc9       etcd-no-preload-472479
	c8d09d18cf8e3       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                           25 minutes ago      Exited              kube-apiserver              1                   338062bcba011       kube-apiserver-no-preload-472479
	
	
	==> coredns [9327a12617e4b3540c1f9b26c01d617308d5b9afd5e3fbae1df01a56d989664e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [aa321fb842b90367df90d9189021142aa9ff4a0c8435d84f4a7336a7c1460f6d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-472479
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-472479
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650
	                    minikube.k8s.io/name=no-preload-472479
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T12_37_30_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 12:37:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-472479
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 12:58:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 12:58:04 +0000   Mon, 27 Jan 2025 12:37:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 12:58:04 +0000   Mon, 27 Jan 2025 12:37:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 12:58:04 +0000   Mon, 27 Jan 2025 12:37:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 12:58:04 +0000   Mon, 27 Jan 2025 12:37:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.27
	  Hostname:    no-preload-472479
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 920a27f702a84964b30f4f1de061649b
	  System UUID:                920a27f7-02a8-4964-b30f-4f1de061649b
	  Boot ID:                    a1160603-c962-4d8f-b8f8-1065ade8821e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-9plpt                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 coredns-668d6bf9bc-cttbf                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-no-preload-472479                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-no-preload-472479              250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-no-preload-472479     200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-777hh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-no-preload-472479              100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-f79f97bbb-sh4m7                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-cgv7x    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-wpmmw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 21m   kube-proxy       
	  Normal  Starting                 21m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m   kubelet          Node no-preload-472479 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m   kubelet          Node no-preload-472479 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m   kubelet          Node no-preload-472479 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21m   node-controller  Node no-preload-472479 event: Registered Node no-preload-472479 in Controller
	
	
	==> dmesg <==
	[  +0.038004] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.986368] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.035104] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.558364] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.445265] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.064641] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057284] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.175573] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +0.138142] systemd-fstab-generator[690]: Ignoring "noauto" option for root device
	[  +0.271366] systemd-fstab-generator[720]: Ignoring "noauto" option for root device
	[ +16.433756] systemd-fstab-generator[1328]: Ignoring "noauto" option for root device
	[  +0.058975] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.824199] systemd-fstab-generator[1452]: Ignoring "noauto" option for root device
	[  +4.611802] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.349054] kauditd_printk_skb: 85 callbacks suppressed
	[Jan27 12:37] systemd-fstab-generator[3225]: Ignoring "noauto" option for root device
	[  +0.059011] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.016011] systemd-fstab-generator[3557]: Ignoring "noauto" option for root device
	[  +0.140630] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.296955] systemd-fstab-generator[3670]: Ignoring "noauto" option for root device
	[  +0.092643] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.496256] kauditd_printk_skb: 110 callbacks suppressed
	[  +7.526492] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [6971eec3af1b0f4f887fbee3100901645ede2ab615d15076831747eecb084228] <==
	{"level":"info","ts":"2025-01-27T12:47:34.201985Z","caller":"traceutil/trace.go:171","msg":"trace[1539652833] linearizableReadLoop","detail":"{readStateIndex:1244; appliedIndex:1243; }","duration":"250.747945ms","start":"2025-01-27T12:47:33.951214Z","end":"2025-01-27T12:47:34.201962Z","steps":["trace[1539652833] 'read index received'  (duration: 24.431µs)","trace[1539652833] 'applied index is now lower than readState.Index'  (duration: 250.722082ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T12:47:34.202132Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"250.872864ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T12:47:34.202198Z","caller":"traceutil/trace.go:171","msg":"trace[956143683] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1109; }","duration":"250.991834ms","start":"2025-01-27T12:47:33.951189Z","end":"2025-01-27T12:47:34.202181Z","steps":["trace[956143683] 'agreement among raft nodes before linearized reading'  (duration: 250.868568ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T12:47:34.890164Z","caller":"traceutil/trace.go:171","msg":"trace[1340204034] linearizableReadLoop","detail":"{readStateIndex:1245; appliedIndex:1244; }","duration":"139.235682ms","start":"2025-01-27T12:47:34.750911Z","end":"2025-01-27T12:47:34.890146Z","steps":["trace[1340204034] 'read index received'  (duration: 139.037995ms)","trace[1340204034] 'applied index is now lower than readState.Index'  (duration: 197.152µs)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T12:47:34.890562Z","caller":"traceutil/trace.go:171","msg":"trace[1831934303] transaction","detail":"{read_only:false; response_revision:1110; number_of_response:1; }","duration":"161.681957ms","start":"2025-01-27T12:47:34.728865Z","end":"2025-01-27T12:47:34.890547Z","steps":["trace[1831934303] 'process raft request'  (duration: 161.129483ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T12:47:34.890900Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.99371ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T12:47:34.890958Z","caller":"traceutil/trace.go:171","msg":"trace[178325197] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1110; }","duration":"140.084695ms","start":"2025-01-27T12:47:34.750866Z","end":"2025-01-27T12:47:34.890951Z","steps":["trace[178325197] 'agreement among raft nodes before linearized reading'  (duration: 139.985729ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T12:47:34.891100Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.72351ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2025-01-27T12:47:34.891486Z","caller":"traceutil/trace.go:171","msg":"trace[481939537] range","detail":"{range_begin:/registry/controllerrevisions/; range_end:/registry/controllerrevisions0; response_count:0; response_revision:1110; }","duration":"125.126562ms","start":"2025-01-27T12:47:34.766339Z","end":"2025-01-27T12:47:34.891466Z","steps":["trace[481939537] 'agreement among raft nodes before linearized reading'  (duration: 124.729666ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T12:47:35.156741Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"206.064674ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T12:47:35.157150Z","caller":"traceutil/trace.go:171","msg":"trace[1045672610] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1110; }","duration":"206.545463ms","start":"2025-01-27T12:47:34.950577Z","end":"2025-01-27T12:47:35.157123Z","steps":["trace[1045672610] 'range keys from in-memory index tree'  (duration: 205.990381ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T12:49:13.684683Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"206.694024ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T12:49:13.684761Z","caller":"traceutil/trace.go:171","msg":"trace[790546707] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1200; }","duration":"206.932722ms","start":"2025-01-27T12:49:13.477812Z","end":"2025-01-27T12:49:13.684745Z","steps":["trace[790546707] 'range keys from in-memory index tree'  (duration: 206.520915ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T12:49:13.684800Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.846537ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T12:49:13.684842Z","caller":"traceutil/trace.go:171","msg":"trace[1009791291] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1200; }","duration":"138.925632ms","start":"2025-01-27T12:49:13.545907Z","end":"2025-01-27T12:49:13.684832Z","steps":["trace[1009791291] 'range keys from in-memory index tree'  (duration: 138.748945ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T12:49:14.332724Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"258.777143ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2884156059650492203 > lease_revoke:<id:280694a7c442068b>","response":"size:29"}
	{"level":"info","ts":"2025-01-27T12:49:14.332846Z","caller":"traceutil/trace.go:171","msg":"trace[1628809903] linearizableReadLoop","detail":"{readStateIndex:1357; appliedIndex:1356; }","duration":"187.387198ms","start":"2025-01-27T12:49:14.145438Z","end":"2025-01-27T12:49:14.332825Z","steps":["trace[1628809903] 'read index received'  (duration: 24.833µs)","trace[1628809903] 'applied index is now lower than readState.Index'  (duration: 187.361088ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T12:49:14.332957Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.504983ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T12:49:14.333024Z","caller":"traceutil/trace.go:171","msg":"trace[1672212998] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1200; }","duration":"187.601188ms","start":"2025-01-27T12:49:14.145414Z","end":"2025-01-27T12:49:14.333015Z","steps":["trace[1672212998] 'agreement among raft nodes before linearized reading'  (duration: 187.492873ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T12:52:25.538315Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1102}
	{"level":"info","ts":"2025-01-27T12:52:25.543967Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1102,"took":"4.843402ms","hash":1928661266,"current-db-size-bytes":2772992,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1712128,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-01-27T12:52:25.544097Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":1928661266,"revision":1102,"compact-revision":850}
	{"level":"info","ts":"2025-01-27T12:57:25.544902Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1352}
	{"level":"info","ts":"2025-01-27T12:57:25.549822Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1352,"took":"4.456247ms","hash":4246154407,"current-db-size-bytes":2772992,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1748992,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-01-27T12:57:25.549891Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":4246154407,"revision":1352,"compact-revision":1102}
	
	
	==> kernel <==
	 12:58:41 up 26 min,  0 users,  load average: 1.31, 0.61, 0.35
	Linux no-preload-472479 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c8d09d18cf8e33807df00f261b97638875851b35f131a920d6d5b0a25625b152] <==
	W0127 12:37:20.335492       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:20.390115       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:20.414040       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:20.440551       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:20.494274       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:20.523397       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:20.559743       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:20.573418       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:20.718911       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:20.736662       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:20.752362       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:20.776115       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:20.792898       1 logging.go:55] [core] [Channel #18 SubChannel #19]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:20.827315       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:20.875349       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:20.903129       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:20.926169       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:20.934171       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:20.965267       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:20.968020       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:21.096288       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:21.123064       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:21.202345       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:21.306002       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:21.441130       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [da7c9e66cd289c57842d9582c62ce038a7afb3d3cdec6d45774c8106516d72e7] <==
	I0127 12:55:27.875981       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 12:55:27.876034       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 12:57:26.873312       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:57:26.873513       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0127 12:57:27.875662       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:57:27.875853       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0127 12:57:27.876002       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:57:27.876108       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 12:57:27.877026       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 12:57:27.878224       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 12:58:27.878131       1 handler_proxy.go:99] no RequestInfo found in the context
	W0127 12:58:27.878371       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:58:27.878505       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0127 12:58:27.878574       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0127 12:58:27.880415       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 12:58:27.880422       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [9a0e35b15427e66982e212309633d7d512497152c5625f7c0a4606193133e948] <==
	I0127 12:53:52.620852       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="79.246µs"
	I0127 12:53:55.704739       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="67.364µs"
	I0127 12:53:59.121215       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="58.995µs"
	E0127 12:54:03.631808       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:54:03.690001       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 12:54:08.695558       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="71.569µs"
	E0127 12:54:33.638318       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:54:33.696548       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:55:03.644709       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:55:03.702913       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:55:33.651836       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:55:33.710501       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:56:03.658057       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:56:03.717676       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:56:33.663429       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:56:33.725259       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:57:03.669034       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:57:03.732779       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:57:33.676161       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:57:33.740408       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:58:03.685463       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:58:03.747053       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 12:58:04.305345       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-472479"
	E0127 12:58:33.690986       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:58:33.753782       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [4d9e93cf9670fe8752d19b5a86bed7774e1f2bd468dc1410ca2de8dbf187209a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 12:37:35.458381       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 12:37:35.506424       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.50.27"]
	E0127 12:37:35.506585       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 12:37:35.624930       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 12:37:35.624975       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 12:37:35.625002       1 server_linux.go:170] "Using iptables Proxier"
	I0127 12:37:35.627480       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 12:37:35.627935       1 server.go:497] "Version info" version="v1.32.1"
	I0127 12:37:35.628069       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:37:35.629937       1 config.go:199] "Starting service config controller"
	I0127 12:37:35.630016       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 12:37:35.630070       1 config.go:105] "Starting endpoint slice config controller"
	I0127 12:37:35.630075       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 12:37:35.630692       1 config.go:329] "Starting node config controller"
	I0127 12:37:35.639395       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 12:37:35.730206       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 12:37:35.730251       1 shared_informer.go:320] Caches are synced for service config
	I0127 12:37:35.741980       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [ec0d63f98dedd7f1e539fdc114ad937eb10508b8ff68238e2b8c62c46ee8c851] <==
	W0127 12:37:27.708202       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0127 12:37:27.708344       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0127 12:37:27.708394       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0127 12:37:27.708402       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 12:37:27.787308       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0127 12:37:27.787351       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:37:27.805547       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0127 12:37:27.805798       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:37:27.819376       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 12:37:27.819484       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:37:27.873422       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0127 12:37:27.873695       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 12:37:27.939991       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 12:37:27.940080       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0127 12:37:27.973420       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0127 12:37:27.973536       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:37:28.022155       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0127 12:37:28.022239       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:37:28.079708       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 12:37:28.079770       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 12:37:28.173431       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 12:37:28.173522       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:37:28.193147       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0127 12:37:28.193202       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:37:29.773234       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 12:57:59 no-preload-472479 kubelet[3564]: E0127 12:57:59.680709    3564 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-cgv7x_kubernetes-dashboard(3f52f406-9235-4bc3-86e2-46436d7d5fae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-cgv7x" podUID="3f52f406-9235-4bc3-86e2-46436d7d5fae"
	Jan 27 12:58:00 no-preload-472479 kubelet[3564]: E0127 12:58:00.034472    3564 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737982680033778122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:58:00 no-preload-472479 kubelet[3564]: E0127 12:58:00.034512    3564 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737982680033778122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:58:04 no-preload-472479 kubelet[3564]: E0127 12:58:04.680178    3564 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-sh4m7" podUID="7a889d9f-c677-4338-a846-7067b568b6ca"
	Jan 27 12:58:10 no-preload-472479 kubelet[3564]: E0127 12:58:10.036135    3564 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737982690035857766,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:58:10 no-preload-472479 kubelet[3564]: E0127 12:58:10.036174    3564 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737982690035857766,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:58:10 no-preload-472479 kubelet[3564]: I0127 12:58:10.679235    3564 scope.go:117] "RemoveContainer" containerID="0bea34199cbbadb247c932c17bb99ba4a629d0561ef9b60695ee1a8a6e25cf0f"
	Jan 27 12:58:10 no-preload-472479 kubelet[3564]: E0127 12:58:10.679492    3564 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-cgv7x_kubernetes-dashboard(3f52f406-9235-4bc3-86e2-46436d7d5fae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-cgv7x" podUID="3f52f406-9235-4bc3-86e2-46436d7d5fae"
	Jan 27 12:58:15 no-preload-472479 kubelet[3564]: E0127 12:58:15.680932    3564 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-sh4m7" podUID="7a889d9f-c677-4338-a846-7067b568b6ca"
	Jan 27 12:58:20 no-preload-472479 kubelet[3564]: E0127 12:58:20.037553    3564 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737982700037332809,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:58:20 no-preload-472479 kubelet[3564]: E0127 12:58:20.037591    3564 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737982700037332809,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:58:22 no-preload-472479 kubelet[3564]: I0127 12:58:22.679278    3564 scope.go:117] "RemoveContainer" containerID="0bea34199cbbadb247c932c17bb99ba4a629d0561ef9b60695ee1a8a6e25cf0f"
	Jan 27 12:58:22 no-preload-472479 kubelet[3564]: E0127 12:58:22.679784    3564 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-cgv7x_kubernetes-dashboard(3f52f406-9235-4bc3-86e2-46436d7d5fae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-cgv7x" podUID="3f52f406-9235-4bc3-86e2-46436d7d5fae"
	Jan 27 12:58:29 no-preload-472479 kubelet[3564]: E0127 12:58:29.707231    3564 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 12:58:29 no-preload-472479 kubelet[3564]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 12:58:29 no-preload-472479 kubelet[3564]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 12:58:29 no-preload-472479 kubelet[3564]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 12:58:29 no-preload-472479 kubelet[3564]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 12:58:30 no-preload-472479 kubelet[3564]: E0127 12:58:30.041425    3564 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737982710039466495,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:58:30 no-preload-472479 kubelet[3564]: E0127 12:58:30.041472    3564 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737982710039466495,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:58:30 no-preload-472479 kubelet[3564]: E0127 12:58:30.680801    3564 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-sh4m7" podUID="7a889d9f-c677-4338-a846-7067b568b6ca"
	Jan 27 12:58:33 no-preload-472479 kubelet[3564]: I0127 12:58:33.678991    3564 scope.go:117] "RemoveContainer" containerID="0bea34199cbbadb247c932c17bb99ba4a629d0561ef9b60695ee1a8a6e25cf0f"
	Jan 27 12:58:33 no-preload-472479 kubelet[3564]: E0127 12:58:33.679153    3564 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-cgv7x_kubernetes-dashboard(3f52f406-9235-4bc3-86e2-46436d7d5fae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-cgv7x" podUID="3f52f406-9235-4bc3-86e2-46436d7d5fae"
	Jan 27 12:58:40 no-preload-472479 kubelet[3564]: E0127 12:58:40.043904    3564 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737982720043473856,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:58:40 no-preload-472479 kubelet[3564]: E0127 12:58:40.044278    3564 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737982720043473856,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> kubernetes-dashboard [e43a60877ae821e20ac4fe8fe0ba85e6a3e3f7a4d83f1932348b6e05e91f939e] <==
	2025/01/27 12:46:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:46:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:47:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:47:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:48:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:48:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:49:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:49:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:50:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:50:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:51:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:51:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:52:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:52:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:53:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:53:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:54:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:54:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:55:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:55:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:56:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:56:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:57:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:57:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:58:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [73610d1234ca3435323d37a384c78ec594ef79a665d353d70eea9d0cf7c191c3] <==
	I0127 12:37:37.221659       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 12:37:37.252077       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 12:37:37.252154       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 12:37:37.265648       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 12:37:37.266080       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-472479_45e35875-f845-4579-9a03-a4e501a0667f!
	I0127 12:37:37.267201       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"192abc06-495f-4aa3-b964-4865b47c84a7", APIVersion:"v1", ResourceVersion:"409", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-472479_45e35875-f845-4579-9a03-a4e501a0667f became leader
	I0127 12:37:37.367575       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-472479_45e35875-f845-4579-9a03-a4e501a0667f!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-472479 -n no-preload-472479
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-472479 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-sh4m7
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-472479 describe pod metrics-server-f79f97bbb-sh4m7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-472479 describe pod metrics-server-f79f97bbb-sh4m7: exit status 1 (61.214009ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-sh4m7" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-472479 describe pod metrics-server-f79f97bbb-sh4m7: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (1606.15s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (1640.76s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-485564 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p default-k8s-diff-port-485564 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: signal: killed (27m18.713205712s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-485564] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20318
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20318-1724227/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-1724227/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "default-k8s-diff-port-485564" primary control-plane node in "default-k8s-diff-port-485564" cluster
	* Restarting existing kvm2 VM for "default-k8s-diff-port-485564" ...
	* Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-485564 addons enable metrics-server
	
	* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 12:32:45.890262 1775128 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:32:45.890403 1775128 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:32:45.890414 1775128 out.go:358] Setting ErrFile to fd 2...
	I0127 12:32:45.890419 1775128 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:32:45.890618 1775128 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-1724227/.minikube/bin
	I0127 12:32:45.891219 1775128 out.go:352] Setting JSON to false
	I0127 12:32:45.892245 1775128 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":33307,"bootTime":1737947859,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 12:32:45.892344 1775128 start.go:139] virtualization: kvm guest
	I0127 12:32:45.894329 1775128 out.go:177] * [default-k8s-diff-port-485564] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 12:32:45.895541 1775128 out.go:177]   - MINIKUBE_LOCATION=20318
	I0127 12:32:45.895563 1775128 notify.go:220] Checking for updates...
	I0127 12:32:45.897790 1775128 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:32:45.899363 1775128 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20318-1724227/kubeconfig
	I0127 12:32:45.900318 1775128 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-1724227/.minikube
	I0127 12:32:45.901193 1775128 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 12:32:45.902164 1775128 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:32:45.903885 1775128 config.go:182] Loaded profile config "default-k8s-diff-port-485564": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:32:45.904520 1775128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:32:45.904602 1775128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:32:45.920163 1775128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45583
	I0127 12:32:45.920583 1775128 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:32:45.921114 1775128 main.go:141] libmachine: Using API Version  1
	I0127 12:32:45.921134 1775128 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:32:45.921472 1775128 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:32:45.921680 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .DriverName
	I0127 12:32:45.921923 1775128 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:32:45.922247 1775128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:32:45.922302 1775128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:32:45.936494 1775128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38811
	I0127 12:32:45.936875 1775128 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:32:45.937418 1775128 main.go:141] libmachine: Using API Version  1
	I0127 12:32:45.937441 1775128 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:32:45.937780 1775128 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:32:45.937983 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .DriverName
	I0127 12:32:45.973582 1775128 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 12:32:45.974605 1775128 start.go:297] selected driver: kvm2
	I0127 12:32:45.974619 1775128 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-485564 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-485564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.190 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:32:45.974773 1775128 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:32:45.975801 1775128 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:32:45.975901 1775128 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20318-1724227/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 12:32:45.992296 1775128 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 12:32:45.992681 1775128 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 12:32:45.992713 1775128 cni.go:84] Creating CNI manager for ""
	I0127 12:32:45.992755 1775128 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 12:32:45.992790 1775128 start.go:340] cluster config:
	{Name:default-k8s-diff-port-485564 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-485564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.190 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:32:45.992894 1775128 iso.go:125] acquiring lock: {Name:mkadfcca31fda677c8c62cad1d325dd2bd0a2473 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:32:45.994353 1775128 out.go:177] * Starting "default-k8s-diff-port-485564" primary control-plane node in "default-k8s-diff-port-485564" cluster
	I0127 12:32:45.995409 1775128 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 12:32:45.995449 1775128 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 12:32:45.995459 1775128 cache.go:56] Caching tarball of preloaded images
	I0127 12:32:45.995591 1775128 preload.go:172] Found /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 12:32:45.995606 1775128 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 12:32:45.995751 1775128 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/default-k8s-diff-port-485564/config.json ...
	I0127 12:32:45.995989 1775128 start.go:360] acquireMachinesLock for default-k8s-diff-port-485564: {Name:mk206ddd5e564ea6986d65bef76be5837a8b5360 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 12:32:45.996044 1775128 start.go:364] duration metric: took 33.317µs to acquireMachinesLock for "default-k8s-diff-port-485564"
	I0127 12:32:45.996065 1775128 start.go:96] Skipping create...Using existing machine configuration
	I0127 12:32:45.996074 1775128 fix.go:54] fixHost starting: 
	I0127 12:32:45.996426 1775128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:32:45.996464 1775128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:32:46.011249 1775128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38023
	I0127 12:32:46.011713 1775128 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:32:46.012251 1775128 main.go:141] libmachine: Using API Version  1
	I0127 12:32:46.012276 1775128 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:32:46.012567 1775128 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:32:46.012768 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .DriverName
	I0127 12:32:46.012950 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetState
	I0127 12:32:46.014516 1775128 fix.go:112] recreateIfNeeded on default-k8s-diff-port-485564: state=Stopped err=<nil>
	I0127 12:32:46.014550 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .DriverName
	W0127 12:32:46.014710 1775128 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 12:32:46.016191 1775128 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-485564" ...
	I0127 12:32:46.017113 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .Start
	I0127 12:32:46.017285 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) starting domain...
	I0127 12:32:46.017307 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) ensuring networks are active...
	I0127 12:32:46.017925 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Ensuring network default is active
	I0127 12:32:46.018207 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Ensuring network mk-default-k8s-diff-port-485564 is active
	I0127 12:32:46.018500 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) getting domain XML...
	I0127 12:32:46.019204 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) creating domain...
	I0127 12:32:47.292441 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) waiting for IP...
	I0127 12:32:47.293516 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:32:47.293939 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | unable to find current IP address of domain default-k8s-diff-port-485564 in network mk-default-k8s-diff-port-485564
	I0127 12:32:47.294029 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | I0127 12:32:47.293934 1775162 retry.go:31] will retry after 240.568718ms: waiting for domain to come up
	I0127 12:32:47.536766 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:32:47.537335 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | unable to find current IP address of domain default-k8s-diff-port-485564 in network mk-default-k8s-diff-port-485564
	I0127 12:32:47.537402 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | I0127 12:32:47.537306 1775162 retry.go:31] will retry after 373.959624ms: waiting for domain to come up
	I0127 12:32:47.913037 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:32:47.913532 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | unable to find current IP address of domain default-k8s-diff-port-485564 in network mk-default-k8s-diff-port-485564
	I0127 12:32:47.913584 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | I0127 12:32:47.913460 1775162 retry.go:31] will retry after 480.222992ms: waiting for domain to come up
	I0127 12:32:48.395097 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:32:48.395681 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | unable to find current IP address of domain default-k8s-diff-port-485564 in network mk-default-k8s-diff-port-485564
	I0127 12:32:48.395712 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | I0127 12:32:48.395639 1775162 retry.go:31] will retry after 485.328377ms: waiting for domain to come up
	I0127 12:32:48.882173 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:32:48.882766 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | unable to find current IP address of domain default-k8s-diff-port-485564 in network mk-default-k8s-diff-port-485564
	I0127 12:32:48.882794 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | I0127 12:32:48.882713 1775162 retry.go:31] will retry after 564.407052ms: waiting for domain to come up
	I0127 12:32:49.448343 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:32:49.448948 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | unable to find current IP address of domain default-k8s-diff-port-485564 in network mk-default-k8s-diff-port-485564
	I0127 12:32:49.448980 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | I0127 12:32:49.448896 1775162 retry.go:31] will retry after 776.050848ms: waiting for domain to come up
	I0127 12:32:50.226665 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:32:50.227201 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | unable to find current IP address of domain default-k8s-diff-port-485564 in network mk-default-k8s-diff-port-485564
	I0127 12:32:50.227229 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | I0127 12:32:50.227172 1775162 retry.go:31] will retry after 818.72083ms: waiting for domain to come up
	I0127 12:32:51.047578 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:32:51.048092 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | unable to find current IP address of domain default-k8s-diff-port-485564 in network mk-default-k8s-diff-port-485564
	I0127 12:32:51.048120 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | I0127 12:32:51.048052 1775162 retry.go:31] will retry after 1.447099703s: waiting for domain to come up
	I0127 12:32:52.496524 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:32:52.496991 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | unable to find current IP address of domain default-k8s-diff-port-485564 in network mk-default-k8s-diff-port-485564
	I0127 12:32:52.497030 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | I0127 12:32:52.496979 1775162 retry.go:31] will retry after 1.204085373s: waiting for domain to come up
	I0127 12:32:53.703328 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:32:53.703759 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | unable to find current IP address of domain default-k8s-diff-port-485564 in network mk-default-k8s-diff-port-485564
	I0127 12:32:53.703782 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | I0127 12:32:53.703734 1775162 retry.go:31] will retry after 1.668624003s: waiting for domain to come up
	I0127 12:32:55.374519 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:32:55.375035 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | unable to find current IP address of domain default-k8s-diff-port-485564 in network mk-default-k8s-diff-port-485564
	I0127 12:32:55.375068 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | I0127 12:32:55.375000 1775162 retry.go:31] will retry after 2.404239207s: waiting for domain to come up
	I0127 12:32:57.780703 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:32:57.781323 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | unable to find current IP address of domain default-k8s-diff-port-485564 in network mk-default-k8s-diff-port-485564
	I0127 12:32:57.781380 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | I0127 12:32:57.781303 1775162 retry.go:31] will retry after 3.236468223s: waiting for domain to come up
	I0127 12:33:01.019293 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:33:01.019750 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | unable to find current IP address of domain default-k8s-diff-port-485564 in network mk-default-k8s-diff-port-485564
	I0127 12:33:01.019774 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | I0127 12:33:01.019717 1775162 retry.go:31] will retry after 3.412941658s: waiting for domain to come up
	I0127 12:33:04.435862 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:33:04.436335 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has current primary IP address 192.168.61.190 and MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:33:04.436358 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) found domain IP: 192.168.61.190
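
The "will retry after …" lines above come from libmachine's wait-for-IP loop: it keeps checking the libvirt DHCP leases with a growing, jittered delay until the domain reports an address. The following is a minimal Go sketch of that polling pattern; the function name, timings and jitter are illustrative, not minikube's actual retry.go.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitForIP polls lookup() with a growing, jittered delay until it returns
    // an address or the overall timeout elapses.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 200 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookup(); err == nil {
    			return ip, nil
    		}
    		// add jitter and grow the delay, roughly like the intervals in the log
    		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
    		time.Sleep(sleep)
    		if delay < 4*time.Second {
    			delay *= 2
    		}
    	}
    	return "", errors.New("timed out waiting for domain IP")
    }

    func main() {
    	// fake lookup that succeeds on the third call, just to exercise the loop
    	calls := 0
    	ip, err := waitForIP(func() (string, error) {
    		calls++
    		if calls < 3 {
    			return "", errors.New("no lease yet")
    		}
    		return "192.168.61.190", nil
    	}, 30*time.Second)
    	fmt.Println(ip, err)
    }
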
	I0127 12:33:04.436373 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) reserving static IP address...
	I0127 12:33:04.436876 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-485564", mac: "52:54:00:60:56:40", ip: "192.168.61.190"} in network mk-default-k8s-diff-port-485564: {Iface:virbr3 ExpiryTime:2025-01-27 13:32:56 +0000 UTC Type:0 Mac:52:54:00:60:56:40 Iaid: IPaddr:192.168.61.190 Prefix:24 Hostname:default-k8s-diff-port-485564 Clientid:01:52:54:00:60:56:40}
	I0127 12:33:04.436915 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) reserved static IP address 192.168.61.190 for domain default-k8s-diff-port-485564
	I0127 12:33:04.436944 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | skip adding static IP to network mk-default-k8s-diff-port-485564 - found existing host DHCP lease matching {name: "default-k8s-diff-port-485564", mac: "52:54:00:60:56:40", ip: "192.168.61.190"}
	I0127 12:33:04.436964 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | Getting to WaitForSSH function...
	I0127 12:33:04.436981 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) waiting for SSH...
	I0127 12:33:04.438973 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:33:04.439340 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:56:40", ip: ""} in network mk-default-k8s-diff-port-485564: {Iface:virbr3 ExpiryTime:2025-01-27 13:32:56 +0000 UTC Type:0 Mac:52:54:00:60:56:40 Iaid: IPaddr:192.168.61.190 Prefix:24 Hostname:default-k8s-diff-port-485564 Clientid:01:52:54:00:60:56:40}
	I0127 12:33:04.439385 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined IP address 192.168.61.190 and MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:33:04.439469 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | Using SSH client type: external
	I0127 12:33:04.439498 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | Using SSH private key: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/default-k8s-diff-port-485564/id_rsa (-rw-------)
	I0127 12:33:04.439529 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.190 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/default-k8s-diff-port-485564/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 12:33:04.439549 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | About to run SSH command:
	I0127 12:33:04.439558 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | exit 0
	I0127 12:33:04.566342 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | SSH cmd err, output: <nil>: 
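
WaitForSSH above uses the external ssh binary with non-interactive, host-key-ignoring options and simply runs "exit 0" until it succeeds. A small sketch of that liveness check invoked from Go; the key path and address are placeholders, and this is not minikube's actual implementation.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // sshAlive runs `exit 0` over the system ssh binary with options similar
    // to the ones shown in the log above.
    func sshAlive(ip, keyPath string) error {
    	args := []string{
    		"-F", "/dev/null",
    		"-o", "ConnectionAttempts=3",
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "PasswordAuthentication=no",
    		"-o", "IdentitiesOnly=yes",
    		"-i", keyPath,
    		"-p", "22",
    		"docker@" + ip,
    		"exit 0",
    	}
    	out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("ssh not ready: %v (output: %s)", err, out)
    	}
    	return nil
    }

    func main() {
    	fmt.Println(sshAlive("192.168.61.190", "/path/to/id_rsa"))
    }
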
	I0127 12:33:04.566681 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetConfigRaw
	I0127 12:33:04.567414 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetIP
	I0127 12:33:04.570136 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:33:04.570520 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:56:40", ip: ""} in network mk-default-k8s-diff-port-485564: {Iface:virbr3 ExpiryTime:2025-01-27 13:32:56 +0000 UTC Type:0 Mac:52:54:00:60:56:40 Iaid: IPaddr:192.168.61.190 Prefix:24 Hostname:default-k8s-diff-port-485564 Clientid:01:52:54:00:60:56:40}
	I0127 12:33:04.570552 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined IP address 192.168.61.190 and MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:33:04.570799 1775128 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/default-k8s-diff-port-485564/config.json ...
	I0127 12:33:04.571029 1775128 machine.go:93] provisionDockerMachine start ...
	I0127 12:33:04.571050 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .DriverName
	I0127 12:33:04.571281 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHHostname
	I0127 12:33:04.573611 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:33:04.573918 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:56:40", ip: ""} in network mk-default-k8s-diff-port-485564: {Iface:virbr3 ExpiryTime:2025-01-27 13:32:56 +0000 UTC Type:0 Mac:52:54:00:60:56:40 Iaid: IPaddr:192.168.61.190 Prefix:24 Hostname:default-k8s-diff-port-485564 Clientid:01:52:54:00:60:56:40}
	I0127 12:33:04.573954 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined IP address 192.168.61.190 and MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:33:04.574146 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHPort
	I0127 12:33:04.574304 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHKeyPath
	I0127 12:33:04.574451 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHKeyPath
	I0127 12:33:04.574574 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHUsername
	I0127 12:33:04.574731 1775128 main.go:141] libmachine: Using SSH client type: native
	I0127 12:33:04.574927 1775128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.190 22 <nil> <nil>}
	I0127 12:33:04.574939 1775128 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 12:33:04.686732 1775128 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 12:33:04.686789 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetMachineName
	I0127 12:33:04.687111 1775128 buildroot.go:166] provisioning hostname "default-k8s-diff-port-485564"
	I0127 12:33:04.687143 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetMachineName
	I0127 12:33:04.687418 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHHostname
	I0127 12:33:04.690226 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:33:04.690677 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:56:40", ip: ""} in network mk-default-k8s-diff-port-485564: {Iface:virbr3 ExpiryTime:2025-01-27 13:32:56 +0000 UTC Type:0 Mac:52:54:00:60:56:40 Iaid: IPaddr:192.168.61.190 Prefix:24 Hostname:default-k8s-diff-port-485564 Clientid:01:52:54:00:60:56:40}
	I0127 12:33:04.690706 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined IP address 192.168.61.190 and MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:33:04.690890 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHPort
	I0127 12:33:04.691121 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHKeyPath
	I0127 12:33:04.691314 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHKeyPath
	I0127 12:33:04.691450 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHUsername
	I0127 12:33:04.691629 1775128 main.go:141] libmachine: Using SSH client type: native
	I0127 12:33:04.691815 1775128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.190 22 <nil> <nil>}
	I0127 12:33:04.691833 1775128 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-485564 && echo "default-k8s-diff-port-485564" | sudo tee /etc/hostname
	I0127 12:33:04.816309 1775128 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-485564
	
	I0127 12:33:04.816342 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHHostname
	I0127 12:33:04.819679 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:33:04.820043 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:56:40", ip: ""} in network mk-default-k8s-diff-port-485564: {Iface:virbr3 ExpiryTime:2025-01-27 13:32:56 +0000 UTC Type:0 Mac:52:54:00:60:56:40 Iaid: IPaddr:192.168.61.190 Prefix:24 Hostname:default-k8s-diff-port-485564 Clientid:01:52:54:00:60:56:40}
	I0127 12:33:04.820089 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined IP address 192.168.61.190 and MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:33:04.820240 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHPort
	I0127 12:33:04.820446 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHKeyPath
	I0127 12:33:04.820651 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHKeyPath
	I0127 12:33:04.820824 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHUsername
	I0127 12:33:04.821030 1775128 main.go:141] libmachine: Using SSH client type: native
	I0127 12:33:04.821206 1775128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.190 22 <nil> <nil>}
	I0127 12:33:04.821223 1775128 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-485564' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-485564/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-485564' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 12:33:04.942553 1775128 main.go:141] libmachine: SSH cmd err, output: <nil>: 
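
The multi-line shell command above rewrites the 127.0.1.1 entry in /etc/hosts only when the new hostname is not already present, so repeated provisioning stays idempotent. A sketch of assembling that snippet from the hostname in Go; the helper name is hypothetical.

    package main

    import "fmt"

    // hostsUpdateScript returns a shell snippet that adds or rewrites the
    // 127.0.1.1 entry for the given hostname, but only when it is missing,
    // mirroring the command shown in the log above.
    func hostsUpdateScript(hostname string) string {
    	return fmt.Sprintf(`
    		if ! grep -xq '.*\s%s' /etc/hosts; then
    			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
    			else
    				echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
    			fi
    		fi`, hostname, hostname, hostname)
    }

    func main() {
    	fmt.Println(hostsUpdateScript("default-k8s-diff-port-485564"))
    }
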
	I0127 12:33:04.942588 1775128 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20318-1724227/.minikube CaCertPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20318-1724227/.minikube}
	I0127 12:33:04.942658 1775128 buildroot.go:174] setting up certificates
	I0127 12:33:04.942678 1775128 provision.go:84] configureAuth start
	I0127 12:33:04.942698 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetMachineName
	I0127 12:33:04.943009 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetIP
	I0127 12:33:04.945426 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:33:04.945710 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:56:40", ip: ""} in network mk-default-k8s-diff-port-485564: {Iface:virbr3 ExpiryTime:2025-01-27 13:32:56 +0000 UTC Type:0 Mac:52:54:00:60:56:40 Iaid: IPaddr:192.168.61.190 Prefix:24 Hostname:default-k8s-diff-port-485564 Clientid:01:52:54:00:60:56:40}
	I0127 12:33:04.945734 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined IP address 192.168.61.190 and MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:33:04.945842 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHHostname
	I0127 12:33:04.948118 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:33:04.948440 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:56:40", ip: ""} in network mk-default-k8s-diff-port-485564: {Iface:virbr3 ExpiryTime:2025-01-27 13:32:56 +0000 UTC Type:0 Mac:52:54:00:60:56:40 Iaid: IPaddr:192.168.61.190 Prefix:24 Hostname:default-k8s-diff-port-485564 Clientid:01:52:54:00:60:56:40}
	I0127 12:33:04.948470 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined IP address 192.168.61.190 and MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:33:04.948599 1775128 provision.go:143] copyHostCerts
	I0127 12:33:04.948680 1775128 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.pem, removing ...
	I0127 12:33:04.948700 1775128 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.pem
	I0127 12:33:04.948760 1775128 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.pem (1078 bytes)
	I0127 12:33:04.948863 1775128 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-1724227/.minikube/cert.pem, removing ...
	I0127 12:33:04.948871 1775128 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-1724227/.minikube/cert.pem
	I0127 12:33:04.948896 1775128 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20318-1724227/.minikube/cert.pem (1123 bytes)
	I0127 12:33:04.948959 1775128 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-1724227/.minikube/key.pem, removing ...
	I0127 12:33:04.948967 1775128 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-1724227/.minikube/key.pem
	I0127 12:33:04.948990 1775128 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20318-1724227/.minikube/key.pem (1675 bytes)
	I0127 12:33:04.949049 1775128 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-485564 san=[127.0.0.1 192.168.61.190 default-k8s-diff-port-485564 localhost minikube]
	I0127 12:33:05.293088 1775128 provision.go:177] copyRemoteCerts
	I0127 12:33:05.293147 1775128 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 12:33:05.293171 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHHostname
	I0127 12:33:05.295790 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:33:05.296147 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:56:40", ip: ""} in network mk-default-k8s-diff-port-485564: {Iface:virbr3 ExpiryTime:2025-01-27 13:32:56 +0000 UTC Type:0 Mac:52:54:00:60:56:40 Iaid: IPaddr:192.168.61.190 Prefix:24 Hostname:default-k8s-diff-port-485564 Clientid:01:52:54:00:60:56:40}
	I0127 12:33:05.296167 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined IP address 192.168.61.190 and MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:33:05.296378 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHPort
	I0127 12:33:05.296601 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHKeyPath
	I0127 12:33:05.296784 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHUsername
	I0127 12:33:05.296941 1775128 sshutil.go:53] new ssh client: &{IP:192.168.61.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/default-k8s-diff-port-485564/id_rsa Username:docker}
	I0127 12:33:05.388285 1775128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 12:33:05.410129 1775128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0127 12:33:05.430683 1775128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 12:33:05.451575 1775128 provision.go:87] duration metric: took 508.880024ms to configureAuth
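
configureAuth above generates a server certificate whose SAN list is [127.0.0.1 192.168.61.190 default-k8s-diff-port-485564 localhost minikube] and copies it to /etc/docker on the guest. A minimal standalone sketch of how such a SAN list maps onto a Go x509 template; minikube signs against its ca.pem/ca-key.pem, whereas this example is self-signed purely for illustration.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-485564"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs from the log: IPs go in IPAddresses, names in DNSNames.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.190")},
    		DNSNames:    []string{"default-k8s-diff-port-485564", "localhost", "minikube"},
    	}
    	// self-signed here for brevity; the real flow signs with the CA key
    	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
    }
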
	I0127 12:33:05.451598 1775128 buildroot.go:189] setting minikube options for container-runtime
	I0127 12:33:05.451823 1775128 config.go:182] Loaded profile config "default-k8s-diff-port-485564": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:33:05.451938 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHHostname
	I0127 12:33:05.454511 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:33:05.454881 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:56:40", ip: ""} in network mk-default-k8s-diff-port-485564: {Iface:virbr3 ExpiryTime:2025-01-27 13:32:56 +0000 UTC Type:0 Mac:52:54:00:60:56:40 Iaid: IPaddr:192.168.61.190 Prefix:24 Hostname:default-k8s-diff-port-485564 Clientid:01:52:54:00:60:56:40}
	I0127 12:33:05.454916 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined IP address 192.168.61.190 and MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:33:05.455121 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHPort
	I0127 12:33:05.455325 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHKeyPath
	I0127 12:33:05.455503 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHKeyPath
	I0127 12:33:05.455649 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHUsername
	I0127 12:33:05.455830 1775128 main.go:141] libmachine: Using SSH client type: native
	I0127 12:33:05.456007 1775128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.190 22 <nil> <nil>}
	I0127 12:33:05.456029 1775128 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 12:33:05.675016 1775128 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 12:33:05.675044 1775128 machine.go:96] duration metric: took 1.104000649s to provisionDockerMachine
	I0127 12:33:05.675060 1775128 start.go:293] postStartSetup for "default-k8s-diff-port-485564" (driver="kvm2")
	I0127 12:33:05.675075 1775128 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 12:33:05.675106 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .DriverName
	I0127 12:33:05.675453 1775128 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 12:33:05.675490 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHHostname
	I0127 12:33:05.678204 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:33:05.678579 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:56:40", ip: ""} in network mk-default-k8s-diff-port-485564: {Iface:virbr3 ExpiryTime:2025-01-27 13:32:56 +0000 UTC Type:0 Mac:52:54:00:60:56:40 Iaid: IPaddr:192.168.61.190 Prefix:24 Hostname:default-k8s-diff-port-485564 Clientid:01:52:54:00:60:56:40}
	I0127 12:33:05.678610 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined IP address 192.168.61.190 and MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:33:05.678878 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHPort
	I0127 12:33:05.679074 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHKeyPath
	I0127 12:33:05.679351 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHUsername
	I0127 12:33:05.679523 1775128 sshutil.go:53] new ssh client: &{IP:192.168.61.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/default-k8s-diff-port-485564/id_rsa Username:docker}
	I0127 12:33:05.764660 1775128 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 12:33:05.768659 1775128 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 12:33:05.768682 1775128 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-1724227/.minikube/addons for local assets ...
	I0127 12:33:05.768746 1775128 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-1724227/.minikube/files for local assets ...
	I0127 12:33:05.768825 1775128 filesync.go:149] local asset: /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem -> 17313962.pem in /etc/ssl/certs
	I0127 12:33:05.768932 1775128 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 12:33:05.777609 1775128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem --> /etc/ssl/certs/17313962.pem (1708 bytes)
	I0127 12:33:05.802478 1775128 start.go:296] duration metric: took 127.404543ms for postStartSetup
	I0127 12:33:05.802521 1775128 fix.go:56] duration metric: took 19.80644735s for fixHost
	I0127 12:33:05.802551 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHHostname
	I0127 12:33:05.805272 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:33:05.805580 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:56:40", ip: ""} in network mk-default-k8s-diff-port-485564: {Iface:virbr3 ExpiryTime:2025-01-27 13:32:56 +0000 UTC Type:0 Mac:52:54:00:60:56:40 Iaid: IPaddr:192.168.61.190 Prefix:24 Hostname:default-k8s-diff-port-485564 Clientid:01:52:54:00:60:56:40}
	I0127 12:33:05.805609 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined IP address 192.168.61.190 and MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:33:05.805784 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHPort
	I0127 12:33:05.806014 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHKeyPath
	I0127 12:33:05.806213 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHKeyPath
	I0127 12:33:05.806370 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHUsername
	I0127 12:33:05.806520 1775128 main.go:141] libmachine: Using SSH client type: native
	I0127 12:33:05.806677 1775128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.190 22 <nil> <nil>}
	I0127 12:33:05.806687 1775128 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 12:33:05.918872 1775128 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737981185.895012846
	
	I0127 12:33:05.918907 1775128 fix.go:216] guest clock: 1737981185.895012846
	I0127 12:33:05.918919 1775128 fix.go:229] Guest: 2025-01-27 12:33:05.895012846 +0000 UTC Remote: 2025-01-27 12:33:05.802526868 +0000 UTC m=+19.954562385 (delta=92.485978ms)
	I0127 12:33:05.918976 1775128 fix.go:200] guest clock delta is within tolerance: 92.485978ms
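
The fix.go lines above run `date +%s.%N` on the guest, compare it with the host clock, and accept the result when the delta stays under a tolerance. A small sketch of that comparison using the exact values from the log; the 2s tolerance and the parsing helper are assumptions for illustration (`%N` is taken to be a full 9-digit nanosecond field).

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // clockDelta parses `date +%s.%N` output from the guest and returns how far
    // it drifts from the given host time.
    func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
    	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return 0, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
    	}
    	guest := time.Unix(sec, nsec)
    	d := guest.Sub(host)
    	if d < 0 {
    		d = -d
    	}
    	return d, nil
    }

    func main() {
    	// values taken from the log above; 2s is an assumed tolerance
    	host := time.Unix(1737981185, 802526868)
    	d, _ := clockDelta("1737981185.895012846", host)
    	fmt.Printf("guest clock delta %v, within tolerance: %v\n", d, d < 2*time.Second)
    }
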
	I0127 12:33:05.918989 1775128 start.go:83] releasing machines lock for "default-k8s-diff-port-485564", held for 19.922930589s
	I0127 12:33:05.919022 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .DriverName
	I0127 12:33:05.919297 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetIP
	I0127 12:33:05.921608 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:33:05.921983 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:56:40", ip: ""} in network mk-default-k8s-diff-port-485564: {Iface:virbr3 ExpiryTime:2025-01-27 13:32:56 +0000 UTC Type:0 Mac:52:54:00:60:56:40 Iaid: IPaddr:192.168.61.190 Prefix:24 Hostname:default-k8s-diff-port-485564 Clientid:01:52:54:00:60:56:40}
	I0127 12:33:05.922015 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined IP address 192.168.61.190 and MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:33:05.922224 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .DriverName
	I0127 12:33:05.922718 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .DriverName
	I0127 12:33:05.922940 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .DriverName
	I0127 12:33:05.923039 1775128 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 12:33:05.923097 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHHostname
	I0127 12:33:05.923161 1775128 ssh_runner.go:195] Run: cat /version.json
	I0127 12:33:05.923189 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHHostname
	I0127 12:33:05.925833 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:33:05.926085 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:33:05.926159 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:56:40", ip: ""} in network mk-default-k8s-diff-port-485564: {Iface:virbr3 ExpiryTime:2025-01-27 13:32:56 +0000 UTC Type:0 Mac:52:54:00:60:56:40 Iaid: IPaddr:192.168.61.190 Prefix:24 Hostname:default-k8s-diff-port-485564 Clientid:01:52:54:00:60:56:40}
	I0127 12:33:05.926180 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined IP address 192.168.61.190 and MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:33:05.926331 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHPort
	I0127 12:33:05.926554 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHKeyPath
	I0127 12:33:05.926658 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:56:40", ip: ""} in network mk-default-k8s-diff-port-485564: {Iface:virbr3 ExpiryTime:2025-01-27 13:32:56 +0000 UTC Type:0 Mac:52:54:00:60:56:40 Iaid: IPaddr:192.168.61.190 Prefix:24 Hostname:default-k8s-diff-port-485564 Clientid:01:52:54:00:60:56:40}
	I0127 12:33:05.926687 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined IP address 192.168.61.190 and MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:33:05.926688 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHUsername
	I0127 12:33:05.926837 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHPort
	I0127 12:33:05.926895 1775128 sshutil.go:53] new ssh client: &{IP:192.168.61.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/default-k8s-diff-port-485564/id_rsa Username:docker}
	I0127 12:33:05.926997 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHKeyPath
	I0127 12:33:05.927135 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHUsername
	I0127 12:33:05.927273 1775128 sshutil.go:53] new ssh client: &{IP:192.168.61.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/default-k8s-diff-port-485564/id_rsa Username:docker}
	I0127 12:33:06.053795 1775128 ssh_runner.go:195] Run: systemctl --version
	I0127 12:33:06.059410 1775128 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 12:33:06.202608 1775128 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 12:33:06.208444 1775128 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 12:33:06.208511 1775128 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 12:33:06.224028 1775128 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 12:33:06.224050 1775128 start.go:495] detecting cgroup driver to use...
	I0127 12:33:06.224106 1775128 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 12:33:06.239407 1775128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 12:33:06.252184 1775128 docker.go:217] disabling cri-docker service (if available) ...
	I0127 12:33:06.252225 1775128 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 12:33:06.264942 1775128 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 12:33:06.277502 1775128 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 12:33:06.390169 1775128 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 12:33:06.536359 1775128 docker.go:233] disabling docker service ...
	I0127 12:33:06.536447 1775128 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 12:33:06.550049 1775128 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 12:33:06.562192 1775128 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 12:33:06.694510 1775128 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 12:33:06.820448 1775128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 12:33:06.833598 1775128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 12:33:06.851815 1775128 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 12:33:06.851883 1775128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:33:06.861704 1775128 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 12:33:06.861756 1775128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:33:06.872593 1775128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:33:06.881789 1775128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:33:06.893722 1775128 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 12:33:06.904229 1775128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:33:06.914233 1775128 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:33:06.929478 1775128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:33:06.939057 1775128 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 12:33:06.947835 1775128 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 12:33:06.947884 1775128 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 12:33:06.960703 1775128 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 12:33:06.969418 1775128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:33:07.109082 1775128 ssh_runner.go:195] Run: sudo systemctl restart crio
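
The CRI-O setup above tolerates the failed sysctl probe ("which might be okay") because net.bridge.bridge-nf-call-iptables only exists once br_netfilter is loaded; it then loads the module, enables IPv4 forwarding, and restarts crio. A sketch of that check-then-fallback sequence from Go; the function name is hypothetical and the commands are run locally rather than over the SSH runner.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // ensureBridgeNetfilter mirrors the fallback above: if the sysctl key is not
    // visible yet, load br_netfilter, then enable IPv4 forwarding.
    func ensureBridgeNetfilter() error {
    	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
    		// the key is missing until the module is loaded; this is the "might be okay" case
    		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
    			return fmt.Errorf("modprobe br_netfilter: %w", err)
    		}
    	}
    	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
    }

    func main() {
    	if err := ensureBridgeNetfilter(); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("bridge netfilter and ip_forward configured")
    }
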
	I0127 12:33:07.205301 1775128 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 12:33:07.205375 1775128 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 12:33:07.210126 1775128 start.go:563] Will wait 60s for crictl version
	I0127 12:33:07.210189 1775128 ssh_runner.go:195] Run: which crictl
	I0127 12:33:07.213671 1775128 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 12:33:07.253782 1775128 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 12:33:07.253857 1775128 ssh_runner.go:195] Run: crio --version
	I0127 12:33:07.286147 1775128 ssh_runner.go:195] Run: crio --version
	I0127 12:33:07.313825 1775128 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 12:33:07.315272 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetIP
	I0127 12:33:07.318394 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:33:07.318765 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:56:40", ip: ""} in network mk-default-k8s-diff-port-485564: {Iface:virbr3 ExpiryTime:2025-01-27 13:32:56 +0000 UTC Type:0 Mac:52:54:00:60:56:40 Iaid: IPaddr:192.168.61.190 Prefix:24 Hostname:default-k8s-diff-port-485564 Clientid:01:52:54:00:60:56:40}
	I0127 12:33:07.318804 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined IP address 192.168.61.190 and MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:33:07.319191 1775128 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0127 12:33:07.323089 1775128 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:33:07.335106 1775128 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-485564 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-485564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.190 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 12:33:07.335250 1775128 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 12:33:07.335305 1775128 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 12:33:07.368088 1775128 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 12:33:07.368182 1775128 ssh_runner.go:195] Run: which lz4
	I0127 12:33:07.372128 1775128 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 12:33:07.375954 1775128 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 12:33:07.375986 1775128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0127 12:33:08.559985 1775128 crio.go:462] duration metric: took 1.187895987s to copy over tarball
	I0127 12:33:08.560072 1775128 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 12:33:10.649861 1775128 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.089750849s)
	I0127 12:33:10.649897 1775128 crio.go:469] duration metric: took 2.089877244s to extract the tarball
	I0127 12:33:10.649907 1775128 ssh_runner.go:146] rm: /preloaded.tar.lz4
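
The preload step above copies an lz4-compressed image tarball into the guest, unpacks it into /var with security xattrs preserved, then removes it. A sketch of the same extraction invoked from Go; the helper is illustrative and requires tar and lz4 on the machine where it runs.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // extractPreload unpacks an lz4-compressed image tarball into destDir,
    // preserving security xattrs, as in the command shown above.
    func extractPreload(tarball, destDir string) (time.Duration, error) {
    	start := time.Now()
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", destDir, "-xf", tarball)
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		return 0, fmt.Errorf("extract %s: %v (%s)", tarball, err, out)
    	}
    	return time.Since(start), nil
    }

    func main() {
    	d, err := extractPreload("/preloaded.tar.lz4", "/var")
    	fmt.Println(d, err)
    }
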
	I0127 12:33:10.689814 1775128 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 12:33:10.730689 1775128 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 12:33:10.730710 1775128 cache_images.go:84] Images are preloaded, skipping loading
	I0127 12:33:10.730717 1775128 kubeadm.go:934] updating node { 192.168.61.190 8444 v1.32.1 crio true true} ...
	I0127 12:33:10.730834 1775128 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-485564 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.190
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-485564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
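
The [Unit]/[Service]/[Install] text above is the systemd drop-in minikube writes for the kubelet, with the per-node hostname override and node IP substituted in. A sketch of rendering such a drop-in with text/template; the template literal below is a simplified stand-in, not minikube's actual asset.

    package main

    import (
    	"os"
    	"text/template"
    )

    // a simplified stand-in for the kubelet drop-in shown in the log above
    const kubeletUnit = `[Unit]
    Wants={{.Runtime}}.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
    	_ = t.Execute(os.Stdout, map[string]string{
    		"Runtime":           "crio",
    		"KubernetesVersion": "v1.32.1",
    		"NodeName":          "default-k8s-diff-port-485564",
    		"NodeIP":            "192.168.61.190",
    	})
    }
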
	I0127 12:33:10.730900 1775128 ssh_runner.go:195] Run: crio config
	I0127 12:33:10.779922 1775128 cni.go:84] Creating CNI manager for ""
	I0127 12:33:10.779952 1775128 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 12:33:10.779965 1775128 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 12:33:10.779995 1775128 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.190 APIServerPort:8444 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-485564 NodeName:default-k8s-diff-port-485564 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.190"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.190 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 12:33:10.780173 1775128 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.190
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-485564"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.190"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.190"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 12:33:10.780248 1775128 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 12:33:10.790935 1775128 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 12:33:10.791031 1775128 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 12:33:10.800781 1775128 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0127 12:33:10.817515 1775128 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 12:33:10.832483 1775128 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I0127 12:33:10.847670 1775128 ssh_runner.go:195] Run: grep 192.168.61.190	control-plane.minikube.internal$ /etc/hosts
	I0127 12:33:10.850914 1775128 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.190	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:33:10.861361 1775128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:33:10.973531 1775128 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:33:10.989804 1775128 certs.go:68] Setting up /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/default-k8s-diff-port-485564 for IP: 192.168.61.190
	I0127 12:33:10.989835 1775128 certs.go:194] generating shared ca certs ...
	I0127 12:33:10.989853 1775128 certs.go:226] acquiring lock for ca certs: {Name:mk9e5c59e9dbe250ccc1601895509e2d4f3690e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:33:10.990040 1775128 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.key
	I0127 12:33:10.990093 1775128 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.key
	I0127 12:33:10.990108 1775128 certs.go:256] generating profile certs ...
	I0127 12:33:10.990265 1775128 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/default-k8s-diff-port-485564/client.key
	I0127 12:33:10.990351 1775128 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/default-k8s-diff-port-485564/apiserver.key.8cd15cae
	I0127 12:33:10.990409 1775128 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/default-k8s-diff-port-485564/proxy-client.key
	I0127 12:33:10.990560 1775128 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/1731396.pem (1338 bytes)
	W0127 12:33:10.990594 1775128 certs.go:480] ignoring /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/1731396_empty.pem, impossibly tiny 0 bytes
	I0127 12:33:10.990603 1775128 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 12:33:10.990646 1775128 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem (1078 bytes)
	I0127 12:33:10.990680 1775128 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem (1123 bytes)
	I0127 12:33:10.990707 1775128 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/key.pem (1675 bytes)
	I0127 12:33:10.990788 1775128 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem (1708 bytes)
	I0127 12:33:10.991456 1775128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 12:33:11.038716 1775128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 12:33:11.080275 1775128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 12:33:11.113584 1775128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 12:33:11.138176 1775128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/default-k8s-diff-port-485564/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0127 12:33:11.161683 1775128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/default-k8s-diff-port-485564/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 12:33:11.184824 1775128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/default-k8s-diff-port-485564/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 12:33:11.208617 1775128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/default-k8s-diff-port-485564/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 12:33:11.231566 1775128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/1731396.pem --> /usr/share/ca-certificates/1731396.pem (1338 bytes)
	I0127 12:33:11.253888 1775128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem --> /usr/share/ca-certificates/17313962.pem (1708 bytes)
	I0127 12:33:11.277332 1775128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 12:33:11.300706 1775128 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 12:33:11.319024 1775128 ssh_runner.go:195] Run: openssl version
	I0127 12:33:11.324955 1775128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1731396.pem && ln -fs /usr/share/ca-certificates/1731396.pem /etc/ssl/certs/1731396.pem"
	I0127 12:33:11.335569 1775128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1731396.pem
	I0127 12:33:11.339649 1775128 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 11:32 /usr/share/ca-certificates/1731396.pem
	I0127 12:33:11.339704 1775128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1731396.pem
	I0127 12:33:11.345247 1775128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1731396.pem /etc/ssl/certs/51391683.0"
	I0127 12:33:11.355089 1775128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17313962.pem && ln -fs /usr/share/ca-certificates/17313962.pem /etc/ssl/certs/17313962.pem"
	I0127 12:33:11.365336 1775128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17313962.pem
	I0127 12:33:11.369453 1775128 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 11:32 /usr/share/ca-certificates/17313962.pem
	I0127 12:33:11.369522 1775128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17313962.pem
	I0127 12:33:11.374489 1775128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17313962.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 12:33:11.384252 1775128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 12:33:11.394195 1775128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:33:11.398062 1775128 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 11:24 /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:33:11.398123 1775128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:33:11.403439 1775128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 12:33:11.413011 1775128 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 12:33:11.417169 1775128 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 12:33:11.422868 1775128 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 12:33:11.428128 1775128 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 12:33:11.433386 1775128 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 12:33:11.438510 1775128 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 12:33:11.444240 1775128 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0127 12:33:11.449619 1775128 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-485564 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-485564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.190 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:33:11.449742 1775128 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 12:33:11.449790 1775128 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 12:33:11.495245 1775128 cri.go:89] found id: ""
	I0127 12:33:11.495319 1775128 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 12:33:11.507349 1775128 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 12:33:11.507370 1775128 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 12:33:11.507420 1775128 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 12:33:11.517495 1775128 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 12:33:11.518318 1775128 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-485564" does not appear in /home/jenkins/minikube-integration/20318-1724227/kubeconfig
	I0127 12:33:11.518819 1775128 kubeconfig.go:62] /home/jenkins/minikube-integration/20318-1724227/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-485564" cluster setting kubeconfig missing "default-k8s-diff-port-485564" context setting]
	I0127 12:33:11.519735 1775128 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/kubeconfig: {Name:mk9a903cebca75da3c74ff68f708e0a4e05f999b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:33:11.521408 1775128 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 12:33:11.530902 1775128 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.190
	I0127 12:33:11.530940 1775128 kubeadm.go:1160] stopping kube-system containers ...
	I0127 12:33:11.530956 1775128 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0127 12:33:11.531006 1775128 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 12:33:11.573818 1775128 cri.go:89] found id: ""
	I0127 12:33:11.573924 1775128 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 12:33:11.590555 1775128 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 12:33:11.599916 1775128 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 12:33:11.599942 1775128 kubeadm.go:157] found existing configuration files:
	
	I0127 12:33:11.600000 1775128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0127 12:33:11.610059 1775128 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 12:33:11.610126 1775128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 12:33:11.619865 1775128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0127 12:33:11.628388 1775128 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 12:33:11.628450 1775128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 12:33:11.637679 1775128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0127 12:33:11.646268 1775128 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 12:33:11.646319 1775128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 12:33:11.655059 1775128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0127 12:33:11.664506 1775128 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 12:33:11.664569 1775128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 12:33:11.673305 1775128 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 12:33:11.682274 1775128 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:33:11.795347 1775128 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:33:12.916661 1775128 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.121276886s)
	I0127 12:33:12.916692 1775128 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:33:13.108797 1775128 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:33:13.164942 1775128 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:33:13.216088 1775128 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:33:13.216215 1775128 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:33:13.716555 1775128 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:33:14.216871 1775128 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:33:14.717310 1775128 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:33:15.216681 1775128 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:33:15.236729 1775128 api_server.go:72] duration metric: took 2.020625356s to wait for apiserver process to appear ...
	I0127 12:33:15.236765 1775128 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:33:15.236790 1775128 api_server.go:253] Checking apiserver healthz at https://192.168.61.190:8444/healthz ...
	I0127 12:33:17.988444 1775128 api_server.go:279] https://192.168.61.190:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 12:33:17.988476 1775128 api_server.go:103] status: https://192.168.61.190:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 12:33:17.988492 1775128 api_server.go:253] Checking apiserver healthz at https://192.168.61.190:8444/healthz ...
	I0127 12:33:18.031630 1775128 api_server.go:279] https://192.168.61.190:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 12:33:18.031658 1775128 api_server.go:103] status: https://192.168.61.190:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 12:33:18.236952 1775128 api_server.go:253] Checking apiserver healthz at https://192.168.61.190:8444/healthz ...
	I0127 12:33:18.244888 1775128 api_server.go:279] https://192.168.61.190:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 12:33:18.244914 1775128 api_server.go:103] status: https://192.168.61.190:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 12:33:18.737684 1775128 api_server.go:253] Checking apiserver healthz at https://192.168.61.190:8444/healthz ...
	I0127 12:33:18.742831 1775128 api_server.go:279] https://192.168.61.190:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 12:33:18.742860 1775128 api_server.go:103] status: https://192.168.61.190:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 12:33:19.237602 1775128 api_server.go:253] Checking apiserver healthz at https://192.168.61.190:8444/healthz ...
	I0127 12:33:19.244535 1775128 api_server.go:279] https://192.168.61.190:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 12:33:19.244562 1775128 api_server.go:103] status: https://192.168.61.190:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 12:33:19.737172 1775128 api_server.go:253] Checking apiserver healthz at https://192.168.61.190:8444/healthz ...
	I0127 12:33:19.743459 1775128 api_server.go:279] https://192.168.61.190:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 12:33:19.743495 1775128 api_server.go:103] status: https://192.168.61.190:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 12:33:20.237146 1775128 api_server.go:253] Checking apiserver healthz at https://192.168.61.190:8444/healthz ...
	I0127 12:33:20.242351 1775128 api_server.go:279] https://192.168.61.190:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 12:33:20.242377 1775128 api_server.go:103] status: https://192.168.61.190:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 12:33:20.736927 1775128 api_server.go:253] Checking apiserver healthz at https://192.168.61.190:8444/healthz ...
	I0127 12:33:20.741589 1775128 api_server.go:279] https://192.168.61.190:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 12:33:20.741619 1775128 api_server.go:103] status: https://192.168.61.190:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 12:33:21.237716 1775128 api_server.go:253] Checking apiserver healthz at https://192.168.61.190:8444/healthz ...
	I0127 12:33:21.244115 1775128 api_server.go:279] https://192.168.61.190:8444/healthz returned 200:
	ok
	I0127 12:33:21.251349 1775128 api_server.go:141] control plane version: v1.32.1
	I0127 12:33:21.251377 1775128 api_server.go:131] duration metric: took 6.014603864s to wait for apiserver health ...
	I0127 12:33:21.251389 1775128 cni.go:84] Creating CNI manager for ""
	I0127 12:33:21.251397 1775128 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 12:33:21.252847 1775128 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 12:33:21.254047 1775128 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 12:33:21.263752 1775128 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 12:33:21.280644 1775128 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:33:21.290380 1775128 system_pods.go:59] 8 kube-system pods found
	I0127 12:33:21.290431 1775128 system_pods.go:61] "coredns-668d6bf9bc-fdh7l" [281d3e0a-6882-400b-889a-8734b735c0e2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 12:33:21.290446 1775128 system_pods.go:61] "etcd-default-k8s-diff-port-485564" [2f6a2787-5f0f-43d1-b1b8-5245754d26eb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 12:33:21.290460 1775128 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-485564" [4d24f59b-8423-4e8f-a5b7-3b33b5118142] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 12:33:21.290477 1775128 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-485564" [5de02681-4f12-4810-910f-b3954e7d341b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 12:33:21.290487 1775128 system_pods.go:61] "kube-proxy-mmlnn" [d71e3238-6ce0-4c81-885a-4d57d9ed8b0d] Running
	I0127 12:33:21.290496 1775128 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-485564" [d27c2d55-5442-430b-8a47-535da3c38600] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 12:33:21.290504 1775128 system_pods.go:61] "metrics-server-f79f97bbb-4twjq" [06c4582f-c6bd-4a4a-80a8-b628011b907a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 12:33:21.290514 1775128 system_pods.go:61] "storage-provisioner" [ca65446f-99c8-4266-a442-ef3dafbc304a] Running
	I0127 12:33:21.290524 1775128 system_pods.go:74] duration metric: took 9.859049ms to wait for pod list to return data ...
	I0127 12:33:21.290536 1775128 node_conditions.go:102] verifying NodePressure condition ...
	I0127 12:33:21.293384 1775128 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 12:33:21.293422 1775128 node_conditions.go:123] node cpu capacity is 2
	I0127 12:33:21.293434 1775128 node_conditions.go:105] duration metric: took 2.891676ms to run NodePressure ...
	I0127 12:33:21.293455 1775128 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:33:21.550259 1775128 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0127 12:33:21.554295 1775128 kubeadm.go:739] kubelet initialised
	I0127 12:33:21.554321 1775128 kubeadm.go:740] duration metric: took 4.03927ms waiting for restarted kubelet to initialise ...
	I0127 12:33:21.554333 1775128 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:33:21.558555 1775128 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-fdh7l" in "kube-system" namespace to be "Ready" ...
	I0127 12:33:21.562676 1775128 pod_ready.go:98] node "default-k8s-diff-port-485564" hosting pod "coredns-668d6bf9bc-fdh7l" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-485564" has status "Ready":"False"
	I0127 12:33:21.562697 1775128 pod_ready.go:82] duration metric: took 4.120907ms for pod "coredns-668d6bf9bc-fdh7l" in "kube-system" namespace to be "Ready" ...
	E0127 12:33:21.562706 1775128 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-485564" hosting pod "coredns-668d6bf9bc-fdh7l" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-485564" has status "Ready":"False"
	I0127 12:33:21.562738 1775128 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-485564" in "kube-system" namespace to be "Ready" ...
	I0127 12:33:21.566897 1775128 pod_ready.go:98] node "default-k8s-diff-port-485564" hosting pod "etcd-default-k8s-diff-port-485564" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-485564" has status "Ready":"False"
	I0127 12:33:21.566916 1775128 pod_ready.go:82] duration metric: took 4.142293ms for pod "etcd-default-k8s-diff-port-485564" in "kube-system" namespace to be "Ready" ...
	E0127 12:33:21.566926 1775128 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-485564" hosting pod "etcd-default-k8s-diff-port-485564" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-485564" has status "Ready":"False"
	I0127 12:33:21.566932 1775128 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-485564" in "kube-system" namespace to be "Ready" ...
	I0127 12:33:21.571248 1775128 pod_ready.go:98] node "default-k8s-diff-port-485564" hosting pod "kube-apiserver-default-k8s-diff-port-485564" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-485564" has status "Ready":"False"
	I0127 12:33:21.571266 1775128 pod_ready.go:82] duration metric: took 4.326058ms for pod "kube-apiserver-default-k8s-diff-port-485564" in "kube-system" namespace to be "Ready" ...
	E0127 12:33:21.571275 1775128 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-485564" hosting pod "kube-apiserver-default-k8s-diff-port-485564" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-485564" has status "Ready":"False"
	I0127 12:33:21.571281 1775128 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-485564" in "kube-system" namespace to be "Ready" ...
	I0127 12:33:21.684322 1775128 pod_ready.go:98] node "default-k8s-diff-port-485564" hosting pod "kube-controller-manager-default-k8s-diff-port-485564" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-485564" has status "Ready":"False"
	I0127 12:33:21.684353 1775128 pod_ready.go:82] duration metric: took 113.06308ms for pod "kube-controller-manager-default-k8s-diff-port-485564" in "kube-system" namespace to be "Ready" ...
	E0127 12:33:21.684364 1775128 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-485564" hosting pod "kube-controller-manager-default-k8s-diff-port-485564" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-485564" has status "Ready":"False"
	I0127 12:33:21.684372 1775128 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-mmlnn" in "kube-system" namespace to be "Ready" ...
	I0127 12:33:22.084269 1775128 pod_ready.go:93] pod "kube-proxy-mmlnn" in "kube-system" namespace has status "Ready":"True"
	I0127 12:33:22.084300 1775128 pod_ready.go:82] duration metric: took 399.919623ms for pod "kube-proxy-mmlnn" in "kube-system" namespace to be "Ready" ...
	I0127 12:33:22.084315 1775128 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-485564" in "kube-system" namespace to be "Ready" ...
	I0127 12:33:24.089426 1775128 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-485564" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:26.090351 1775128 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-485564" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:28.090414 1775128 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-485564" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:29.091968 1775128 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-485564" in "kube-system" namespace has status "Ready":"True"
	I0127 12:33:29.091996 1775128 pod_ready.go:82] duration metric: took 7.007671629s for pod "kube-scheduler-default-k8s-diff-port-485564" in "kube-system" namespace to be "Ready" ...
	I0127 12:33:29.092010 1775128 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace to be "Ready" ...
	I0127 12:33:31.098653 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:33.598210 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:35.598281 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:38.099743 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:40.099815 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:42.598727 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:45.099160 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:47.599517 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:50.099447 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:52.100002 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:54.100459 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:56.600304 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:59.762924 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:02.097746 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:04.098071 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:06.598163 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:08.598529 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:11.098550 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:13.597616 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:15.597694 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:17.599221 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:20.099466 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:22.598598 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:25.098067 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:27.098508 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:29.099284 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:31.598368 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:34.098716 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:36.598557 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:39.097626 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:41.100311 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:43.598087 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:46.101076 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:48.598281 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:50.599576 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:53.098358 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:55.598941 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:58.099660 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:00.597549 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:02.598501 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:04.598889 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:07.098976 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:09.099091 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:11.100186 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:13.597943 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:15.598156 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:18.098370 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:20.597436 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:22.598516 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:24.598945 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:26.599330 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:29.098892 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:31.600017 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:34.097714 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:36.097788 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:38.098499 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:40.598757 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:43.098525 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:45.597953 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:48.097982 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:50.099360 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:52.597705 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:54.597877 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:57.099613 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:59.598672 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:02.097976 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:04.098608 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:06.598326 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:08.598591 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:11.102335 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:13.598415 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:16.099416 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:18.599904 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:21.097950 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:23.598010 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:25.599006 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:27.599176 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:30.097616 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:32.098789 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:34.598490 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:36.599014 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:39.097594 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:41.097925 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:43.099434 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:45.597710 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:47.598232 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:50.098004 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:52.098617 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:54.098698 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:56.598565 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:58.599902 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:37:01.098737 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:37:03.600093 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:37:06.098978 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:37:08.598424 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:37:10.599432 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:37:13.097885 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:37:15.597554 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:37:17.598737 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:37:19.600716 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:37:22.099901 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:37:24.600393 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:37:27.181116 1775128 pod_ready.go:103] pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace has status "Ready":"False"
	I0127 12:37:29.092253 1775128 pod_ready.go:82] duration metric: took 4m0.000221886s for pod "metrics-server-f79f97bbb-4twjq" in "kube-system" namespace to be "Ready" ...
	E0127 12:37:29.092292 1775128 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0127 12:37:29.092340 1775128 pod_ready.go:39] duration metric: took 4m7.53798984s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:37:29.092379 1775128 kubeadm.go:597] duration metric: took 4m17.58500246s to restartPrimaryControlPlane
	W0127 12:37:29.092463 1775128 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 12:37:29.092518 1775128 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 12:37:56.834397 1775128 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.741842326s)
	I0127 12:37:56.834500 1775128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:37:56.853478 1775128 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 12:37:56.864858 1775128 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 12:37:56.877929 1775128 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 12:37:56.877957 1775128 kubeadm.go:157] found existing configuration files:
	
	I0127 12:37:56.878013 1775128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0127 12:37:56.887068 1775128 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 12:37:56.887148 1775128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 12:37:56.897738 1775128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0127 12:37:56.906589 1775128 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 12:37:56.906651 1775128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 12:37:56.915616 1775128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0127 12:37:56.924092 1775128 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 12:37:56.924156 1775128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 12:37:56.933715 1775128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0127 12:37:56.942412 1775128 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 12:37:56.942474 1775128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 12:37:56.951491 1775128 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 12:37:57.003713 1775128 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 12:37:57.003797 1775128 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 12:37:57.129461 1775128 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 12:37:57.129601 1775128 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 12:37:57.129748 1775128 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 12:37:57.140822 1775128 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 12:37:57.142825 1775128 out.go:235]   - Generating certificates and keys ...
	I0127 12:37:57.142951 1775128 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 12:37:57.143044 1775128 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 12:37:57.143162 1775128 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 12:37:57.143278 1775128 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 12:37:57.143399 1775128 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 12:37:57.143484 1775128 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 12:37:57.148233 1775128 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 12:37:57.148354 1775128 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 12:37:57.148475 1775128 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 12:37:57.148614 1775128 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 12:37:57.148680 1775128 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 12:37:57.148778 1775128 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 12:37:57.276164 1775128 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 12:37:57.406927 1775128 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 12:37:57.744593 1775128 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 12:37:57.922277 1775128 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 12:37:58.162966 1775128 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 12:37:58.163536 1775128 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 12:37:58.165869 1775128 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 12:37:58.167671 1775128 out.go:235]   - Booting up control plane ...
	I0127 12:37:58.167792 1775128 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 12:37:58.167890 1775128 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 12:37:58.170334 1775128 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 12:37:58.187677 1775128 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 12:37:58.196109 1775128 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 12:37:58.196161 1775128 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 12:37:58.331769 1775128 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 12:37:58.331962 1775128 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 12:37:59.334042 1775128 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002302249s
	I0127 12:37:59.334165 1775128 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 12:38:04.335821 1775128 kubeadm.go:310] [api-check] The API server is healthy after 5.001626114s
	I0127 12:38:04.347577 1775128 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 12:38:04.859763 1775128 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 12:38:04.880447 1775128 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 12:38:04.880705 1775128 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-485564 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 12:38:04.891588 1775128 kubeadm.go:310] [bootstrap-token] Using token: 7wo4nb.gk386zayszu5fe5w
	I0127 12:38:04.892925 1775128 out.go:235]   - Configuring RBAC rules ...
	I0127 12:38:04.893073 1775128 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 12:38:04.899703 1775128 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 12:38:04.905838 1775128 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 12:38:04.908578 1775128 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 12:38:04.913530 1775128 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 12:38:04.916107 1775128 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 12:38:05.060165 1775128 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 12:38:05.495719 1775128 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 12:38:06.061566 1775128 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 12:38:06.062693 1775128 kubeadm.go:310] 
	I0127 12:38:06.062784 1775128 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 12:38:06.062799 1775128 kubeadm.go:310] 
	I0127 12:38:06.062871 1775128 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 12:38:06.062906 1775128 kubeadm.go:310] 
	I0127 12:38:06.062957 1775128 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 12:38:06.063049 1775128 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 12:38:06.063129 1775128 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 12:38:06.063158 1775128 kubeadm.go:310] 
	I0127 12:38:06.063287 1775128 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 12:38:06.063324 1775128 kubeadm.go:310] 
	I0127 12:38:06.063400 1775128 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 12:38:06.063414 1775128 kubeadm.go:310] 
	I0127 12:38:06.063494 1775128 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 12:38:06.063603 1775128 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 12:38:06.063727 1775128 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 12:38:06.063750 1775128 kubeadm.go:310] 
	I0127 12:38:06.063848 1775128 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 12:38:06.063960 1775128 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 12:38:06.063973 1775128 kubeadm.go:310] 
	I0127 12:38:06.064094 1775128 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 7wo4nb.gk386zayszu5fe5w \
	I0127 12:38:06.064249 1775128 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b321ef7b3ab14f6d2ff43d6919f4b8b8c82a4707be0e2d90af4f9d1ba84cbc1f \
	I0127 12:38:06.064274 1775128 kubeadm.go:310] 	--control-plane 
	I0127 12:38:06.064280 1775128 kubeadm.go:310] 
	I0127 12:38:06.064406 1775128 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 12:38:06.064416 1775128 kubeadm.go:310] 
	I0127 12:38:06.064529 1775128 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 7wo4nb.gk386zayszu5fe5w \
	I0127 12:38:06.064674 1775128 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b321ef7b3ab14f6d2ff43d6919f4b8b8c82a4707be0e2d90af4f9d1ba84cbc1f 
	I0127 12:38:06.065446 1775128 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 12:38:06.065512 1775128 cni.go:84] Creating CNI manager for ""
	I0127 12:38:06.065532 1775128 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 12:38:06.066800 1775128 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 12:38:06.068037 1775128 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 12:38:06.079038 1775128 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 12:38:06.099365 1775128 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 12:38:06.099490 1775128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:38:06.099543 1775128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-485564 minikube.k8s.io/updated_at=2025_01_27T12_38_06_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650 minikube.k8s.io/name=default-k8s-diff-port-485564 minikube.k8s.io/primary=true
	I0127 12:38:06.124313 1775128 ops.go:34] apiserver oom_adj: -16
	I0127 12:38:06.289845 1775128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:38:06.790112 1775128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:38:07.290014 1775128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:38:07.790250 1775128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:38:08.290761 1775128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:38:08.790734 1775128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:38:09.290847 1775128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:38:09.364395 1775128 kubeadm.go:1113] duration metric: took 3.264968098s to wait for elevateKubeSystemPrivileges
	I0127 12:38:09.364435 1775128 kubeadm.go:394] duration metric: took 4m57.914824842s to StartCluster
	I0127 12:38:09.364463 1775128 settings.go:142] acquiring lock: {Name:mk5612abdbdf8001cdf3481ad7c8001a04d496dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:38:09.364549 1775128 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20318-1724227/kubeconfig
	I0127 12:38:09.365661 1775128 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/kubeconfig: {Name:mk9a903cebca75da3c74ff68f708e0a4e05f999b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:38:09.365943 1775128 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.190 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 12:38:09.366112 1775128 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 12:38:09.366209 1775128 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-485564"
	I0127 12:38:09.366213 1775128 config.go:182] Loaded profile config "default-k8s-diff-port-485564": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:38:09.366234 1775128 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-485564"
	W0127 12:38:09.366247 1775128 addons.go:247] addon storage-provisioner should already be in state true
	I0127 12:38:09.366260 1775128 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-485564"
	I0127 12:38:09.366285 1775128 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-485564"
	W0127 12:38:09.366296 1775128 addons.go:247] addon dashboard should already be in state true
	I0127 12:38:09.366300 1775128 host.go:66] Checking if "default-k8s-diff-port-485564" exists ...
	I0127 12:38:09.366237 1775128 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-485564"
	I0127 12:38:09.366336 1775128 host.go:66] Checking if "default-k8s-diff-port-485564" exists ...
	I0127 12:38:09.366257 1775128 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-485564"
	I0127 12:38:09.366379 1775128 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-485564"
	W0127 12:38:09.366392 1775128 addons.go:247] addon metrics-server should already be in state true
	I0127 12:38:09.366417 1775128 host.go:66] Checking if "default-k8s-diff-port-485564" exists ...
	I0127 12:38:09.366349 1775128 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-485564"
	I0127 12:38:09.366693 1775128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:38:09.366740 1775128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:38:09.366775 1775128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:38:09.366791 1775128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:38:09.366776 1775128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:38:09.366809 1775128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:38:09.366826 1775128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:38:09.366869 1775128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:38:09.368572 1775128 out.go:177] * Verifying Kubernetes components...
	I0127 12:38:09.369840 1775128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:38:09.382734 1775128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41205
	I0127 12:38:09.382808 1775128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45701
	I0127 12:38:09.383263 1775128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43265
	I0127 12:38:09.383277 1775128 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:38:09.383364 1775128 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:38:09.383640 1775128 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:38:09.383943 1775128 main.go:141] libmachine: Using API Version  1
	I0127 12:38:09.383970 1775128 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:38:09.384006 1775128 main.go:141] libmachine: Using API Version  1
	I0127 12:38:09.384028 1775128 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:38:09.384069 1775128 main.go:141] libmachine: Using API Version  1
	I0127 12:38:09.384096 1775128 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:38:09.384372 1775128 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:38:09.384424 1775128 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:38:09.384444 1775128 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:38:09.384934 1775128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:38:09.384972 1775128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:38:09.384990 1775128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:38:09.385031 1775128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:38:09.385531 1775128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:38:09.385573 1775128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:38:09.385587 1775128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40027
	I0127 12:38:09.386059 1775128 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:38:09.386603 1775128 main.go:141] libmachine: Using API Version  1
	I0127 12:38:09.386631 1775128 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:38:09.387039 1775128 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:38:09.387394 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetState
	I0127 12:38:09.390900 1775128 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-485564"
	W0127 12:38:09.390922 1775128 addons.go:247] addon default-storageclass should already be in state true
	I0127 12:38:09.390951 1775128 host.go:66] Checking if "default-k8s-diff-port-485564" exists ...
	I0127 12:38:09.391297 1775128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:38:09.391338 1775128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:38:09.405548 1775128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43761
	I0127 12:38:09.405777 1775128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39611
	I0127 12:38:09.405796 1775128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42145
	I0127 12:38:09.405956 1775128 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:38:09.406225 1775128 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:38:09.406245 1775128 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:38:09.406654 1775128 main.go:141] libmachine: Using API Version  1
	I0127 12:38:09.406668 1775128 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:38:09.406785 1775128 main.go:141] libmachine: Using API Version  1
	I0127 12:38:09.406817 1775128 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:38:09.407120 1775128 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:38:09.407233 1775128 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:38:09.407263 1775128 main.go:141] libmachine: Using API Version  1
	I0127 12:38:09.407284 1775128 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:38:09.407351 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetState
	I0127 12:38:09.407467 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetState
	I0127 12:38:09.407665 1775128 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:38:09.407872 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetState
	I0127 12:38:09.409545 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .DriverName
	I0127 12:38:09.409834 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .DriverName
	I0127 12:38:09.410157 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .DriverName
	I0127 12:38:09.411431 1775128 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 12:38:09.411434 1775128 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 12:38:09.411433 1775128 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:38:09.412352 1775128 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 12:38:09.412369 1775128 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 12:38:09.412386 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHHostname
	I0127 12:38:09.412995 1775128 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:38:09.413015 1775128 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 12:38:09.413033 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHHostname
	I0127 12:38:09.413941 1775128 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 12:38:09.414312 1775128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42813
	I0127 12:38:09.415022 1775128 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:38:09.415259 1775128 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 12:38:09.415282 1775128 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 12:38:09.415303 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHHostname
	I0127 12:38:09.415852 1775128 main.go:141] libmachine: Using API Version  1
	I0127 12:38:09.415871 1775128 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:38:09.416529 1775128 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:38:09.416918 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:38:09.417540 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:38:09.417577 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:56:40", ip: ""} in network mk-default-k8s-diff-port-485564: {Iface:virbr3 ExpiryTime:2025-01-27 13:32:56 +0000 UTC Type:0 Mac:52:54:00:60:56:40 Iaid: IPaddr:192.168.61.190 Prefix:24 Hostname:default-k8s-diff-port-485564 Clientid:01:52:54:00:60:56:40}
	I0127 12:38:09.417626 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined IP address 192.168.61.190 and MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:38:09.417625 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHPort
	I0127 12:38:09.417784 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHKeyPath
	I0127 12:38:09.417939 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHUsername
	I0127 12:38:09.418105 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:56:40", ip: ""} in network mk-default-k8s-diff-port-485564: {Iface:virbr3 ExpiryTime:2025-01-27 13:32:56 +0000 UTC Type:0 Mac:52:54:00:60:56:40 Iaid: IPaddr:192.168.61.190 Prefix:24 Hostname:default-k8s-diff-port-485564 Clientid:01:52:54:00:60:56:40}
	I0127 12:38:09.418126 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined IP address 192.168.61.190 and MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:38:09.418161 1775128 sshutil.go:53] new ssh client: &{IP:192.168.61.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/default-k8s-diff-port-485564/id_rsa Username:docker}
	I0127 12:38:09.418400 1775128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:38:09.418446 1775128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:38:09.418715 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHPort
	I0127 12:38:09.418961 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHKeyPath
	I0127 12:38:09.419131 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHUsername
	I0127 12:38:09.419188 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:38:09.419341 1775128 sshutil.go:53] new ssh client: &{IP:192.168.61.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/default-k8s-diff-port-485564/id_rsa Username:docker}
	I0127 12:38:09.419872 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:56:40", ip: ""} in network mk-default-k8s-diff-port-485564: {Iface:virbr3 ExpiryTime:2025-01-27 13:32:56 +0000 UTC Type:0 Mac:52:54:00:60:56:40 Iaid: IPaddr:192.168.61.190 Prefix:24 Hostname:default-k8s-diff-port-485564 Clientid:01:52:54:00:60:56:40}
	I0127 12:38:09.419899 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined IP address 192.168.61.190 and MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:38:09.419939 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHPort
	I0127 12:38:09.420118 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHKeyPath
	I0127 12:38:09.420264 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHUsername
	I0127 12:38:09.420497 1775128 sshutil.go:53] new ssh client: &{IP:192.168.61.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/default-k8s-diff-port-485564/id_rsa Username:docker}
	I0127 12:38:09.434020 1775128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40631
	I0127 12:38:09.434473 1775128 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:38:09.434996 1775128 main.go:141] libmachine: Using API Version  1
	I0127 12:38:09.435011 1775128 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:38:09.435261 1775128 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:38:09.435433 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetState
	I0127 12:38:09.436723 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .DriverName
	I0127 12:38:09.436939 1775128 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 12:38:09.436960 1775128 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 12:38:09.436985 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHHostname
	I0127 12:38:09.439888 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:38:09.440319 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:56:40", ip: ""} in network mk-default-k8s-diff-port-485564: {Iface:virbr3 ExpiryTime:2025-01-27 13:32:56 +0000 UTC Type:0 Mac:52:54:00:60:56:40 Iaid: IPaddr:192.168.61.190 Prefix:24 Hostname:default-k8s-diff-port-485564 Clientid:01:52:54:00:60:56:40}
	I0127 12:38:09.440342 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | domain default-k8s-diff-port-485564 has defined IP address 192.168.61.190 and MAC address 52:54:00:60:56:40 in network mk-default-k8s-diff-port-485564
	I0127 12:38:09.440511 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHPort
	I0127 12:38:09.440654 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHKeyPath
	I0127 12:38:09.440781 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .GetSSHUsername
	I0127 12:38:09.440863 1775128 sshutil.go:53] new ssh client: &{IP:192.168.61.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/default-k8s-diff-port-485564/id_rsa Username:docker}
	I0127 12:38:09.596488 1775128 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:38:09.624797 1775128 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-485564" to be "Ready" ...
	I0127 12:38:09.635538 1775128 node_ready.go:49] node "default-k8s-diff-port-485564" has status "Ready":"True"
	I0127 12:38:09.635570 1775128 node_ready.go:38] duration metric: took 10.731553ms for node "default-k8s-diff-port-485564" to be "Ready" ...
	I0127 12:38:09.635579 1775128 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:38:09.643119 1775128 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-485564" in "kube-system" namespace to be "Ready" ...
	I0127 12:38:09.677648 1775128 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 12:38:09.717523 1775128 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:38:09.774499 1775128 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 12:38:09.774535 1775128 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 12:38:09.789471 1775128 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 12:38:09.789497 1775128 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 12:38:09.835699 1775128 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 12:38:09.835730 1775128 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 12:38:09.897880 1775128 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 12:38:09.897917 1775128 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 12:38:09.959615 1775128 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 12:38:09.959646 1775128 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 12:38:09.981292 1775128 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 12:38:09.981324 1775128 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 12:38:10.027887 1775128 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 12:38:10.027922 1775128 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 12:38:10.033987 1775128 main.go:141] libmachine: Making call to close driver server
	I0127 12:38:10.034012 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .Close
	I0127 12:38:10.034331 1775128 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:38:10.034353 1775128 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:38:10.034363 1775128 main.go:141] libmachine: Making call to close driver server
	I0127 12:38:10.034372 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .Close
	I0127 12:38:10.034581 1775128 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:38:10.034596 1775128 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:38:10.044731 1775128 main.go:141] libmachine: Making call to close driver server
	I0127 12:38:10.044751 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .Close
	I0127 12:38:10.045030 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | Closing plugin on server side
	I0127 12:38:10.045037 1775128 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:38:10.045052 1775128 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:38:10.055195 1775128 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 12:38:10.060545 1775128 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 12:38:10.060563 1775128 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 12:38:10.088154 1775128 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 12:38:10.088187 1775128 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 12:38:10.133583 1775128 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 12:38:10.133606 1775128 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 12:38:10.222005 1775128 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 12:38:10.222038 1775128 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 12:38:10.250875 1775128 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 12:38:10.250908 1775128 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 12:38:10.328983 1775128 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 12:38:10.941427 1775128 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.223863098s)
	I0127 12:38:10.941484 1775128 main.go:141] libmachine: Making call to close driver server
	I0127 12:38:10.941497 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .Close
	I0127 12:38:10.941801 1775128 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:38:10.941833 1775128 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:38:10.941842 1775128 main.go:141] libmachine: Making call to close driver server
	I0127 12:38:10.941849 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .Close
	I0127 12:38:10.942082 1775128 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:38:10.942127 1775128 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:38:10.942131 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | Closing plugin on server side
	I0127 12:38:11.317071 1775128 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.261830523s)
	I0127 12:38:11.317157 1775128 main.go:141] libmachine: Making call to close driver server
	I0127 12:38:11.317181 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .Close
	I0127 12:38:11.317601 1775128 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:38:11.317623 1775128 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:38:11.317634 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | Closing plugin on server side
	I0127 12:38:11.317640 1775128 main.go:141] libmachine: Making call to close driver server
	I0127 12:38:11.317728 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .Close
	I0127 12:38:11.317948 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | Closing plugin on server side
	I0127 12:38:11.317982 1775128 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:38:11.317989 1775128 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:38:11.318000 1775128 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-485564"
	I0127 12:38:11.665731 1775128 pod_ready.go:103] pod "etcd-default-k8s-diff-port-485564" in "kube-system" namespace has status "Ready":"False"
	I0127 12:38:12.373356 1775128 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.044305404s)
	I0127 12:38:12.373442 1775128 main.go:141] libmachine: Making call to close driver server
	I0127 12:38:12.373462 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .Close
	I0127 12:38:12.373891 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) DBG | Closing plugin on server side
	I0127 12:38:12.373983 1775128 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:38:12.374001 1775128 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:38:12.374014 1775128 main.go:141] libmachine: Making call to close driver server
	I0127 12:38:12.374026 1775128 main.go:141] libmachine: (default-k8s-diff-port-485564) Calling .Close
	I0127 12:38:12.374320 1775128 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:38:12.374344 1775128 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:38:12.376061 1775128 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-485564 addons enable metrics-server
	
	I0127 12:38:12.377495 1775128 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0127 12:38:12.378495 1775128 addons.go:514] duration metric: took 3.012402563s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0127 12:38:14.151807 1775128 pod_ready.go:103] pod "etcd-default-k8s-diff-port-485564" in "kube-system" namespace has status "Ready":"False"
	I0127 12:38:16.649253 1775128 pod_ready.go:103] pod "etcd-default-k8s-diff-port-485564" in "kube-system" namespace has status "Ready":"False"
	I0127 12:38:18.653790 1775128 pod_ready.go:103] pod "etcd-default-k8s-diff-port-485564" in "kube-system" namespace has status "Ready":"False"
	I0127 12:38:20.626032 1775128 pod_ready.go:93] pod "etcd-default-k8s-diff-port-485564" in "kube-system" namespace has status "Ready":"True"
	I0127 12:38:20.626074 1775128 pod_ready.go:82] duration metric: took 10.982928372s for pod "etcd-default-k8s-diff-port-485564" in "kube-system" namespace to be "Ready" ...
	I0127 12:38:20.626096 1775128 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-485564" in "kube-system" namespace to be "Ready" ...
	I0127 12:38:21.126143 1775128 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-485564" in "kube-system" namespace has status "Ready":"True"
	I0127 12:38:21.126191 1775128 pod_ready.go:82] duration metric: took 500.084558ms for pod "kube-apiserver-default-k8s-diff-port-485564" in "kube-system" namespace to be "Ready" ...
	I0127 12:38:21.126208 1775128 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-485564" in "kube-system" namespace to be "Ready" ...
	I0127 12:38:21.148018 1775128 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-485564" in "kube-system" namespace has status "Ready":"True"
	I0127 12:38:21.148057 1775128 pod_ready.go:82] duration metric: took 21.839429ms for pod "kube-controller-manager-default-k8s-diff-port-485564" in "kube-system" namespace to be "Ready" ...
	I0127 12:38:21.148073 1775128 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-485564" in "kube-system" namespace to be "Ready" ...
	I0127 12:38:21.163352 1775128 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-485564" in "kube-system" namespace has status "Ready":"True"
	I0127 12:38:21.163388 1775128 pod_ready.go:82] duration metric: took 15.30479ms for pod "kube-scheduler-default-k8s-diff-port-485564" in "kube-system" namespace to be "Ready" ...
	I0127 12:38:21.163402 1775128 pod_ready.go:39] duration metric: took 11.527812324s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:38:21.163426 1775128 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:38:21.163492 1775128 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:38:21.191911 1775128 api_server.go:72] duration metric: took 11.825929141s to wait for apiserver process to appear ...
	I0127 12:38:21.191945 1775128 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:38:21.191970 1775128 api_server.go:253] Checking apiserver healthz at https://192.168.61.190:8444/healthz ...
	I0127 12:38:21.199904 1775128 api_server.go:279] https://192.168.61.190:8444/healthz returned 200:
	ok
	I0127 12:38:21.201692 1775128 api_server.go:141] control plane version: v1.32.1
	I0127 12:38:21.201719 1775128 api_server.go:131] duration metric: took 9.766421ms to wait for apiserver health ...
	I0127 12:38:21.201730 1775128 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:38:21.211303 1775128 system_pods.go:59] 9 kube-system pods found
	I0127 12:38:21.211338 1775128 system_pods.go:61] "coredns-668d6bf9bc-sqbf8" [151241be-6a72-4400-b65e-8ce91d8b7778] Running
	I0127 12:38:21.211347 1775128 system_pods.go:61] "coredns-668d6bf9bc-tn2kk" [58897b28-3c69-4c27-bbb5-c5f40f29fc79] Running
	I0127 12:38:21.211354 1775128 system_pods.go:61] "etcd-default-k8s-diff-port-485564" [c3e2ccc8-1549-4c47-97b5-0e090da3f829] Running
	I0127 12:38:21.211360 1775128 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-485564" [dbfadca5-12c3-40ab-9267-5b834027acce] Running
	I0127 12:38:21.211364 1775128 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-485564" [488401a4-f6d7-4b51-8f07-53b987e006d6] Running
	I0127 12:38:21.211367 1775128 system_pods.go:61] "kube-proxy-sms7c" [5eac7f36-acf3-4d10-b37a-a8fb1d46b787] Running
	I0127 12:38:21.211372 1775128 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-485564" [b875e644-bece-4e7a-a772-e1e754f4cfbe] Running
	I0127 12:38:21.211380 1775128 system_pods.go:61] "metrics-server-f79f97bbb-x9qcz" [a29b3256-0775-4c65-b7fb-706574cf8487] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 12:38:21.211386 1775128 system_pods.go:61] "storage-provisioner" [b23604bf-cd27-429a-8b5b-f5a6f6de713d] Running
	I0127 12:38:21.211397 1775128 system_pods.go:74] duration metric: took 9.660085ms to wait for pod list to return data ...
	I0127 12:38:21.211411 1775128 default_sa.go:34] waiting for default service account to be created ...
	I0127 12:38:21.213845 1775128 default_sa.go:45] found service account: "default"
	I0127 12:38:21.213875 1775128 default_sa.go:55] duration metric: took 2.455549ms for default service account to be created ...
	I0127 12:38:21.213886 1775128 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 12:38:21.219345 1775128 system_pods.go:87] 9 kube-system pods found

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p default-k8s-diff-port-485564 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1": signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-485564 -n default-k8s-diff-port-485564
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-485564 logs -n 25
E0127 13:00:05.761100 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-485564 logs -n 25: (1.306815915s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-956477 sudo                                | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:49 UTC | 27 Jan 25 12:49 UTC |
	|         | systemctl cat kubelet                                |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:49 UTC | 27 Jan 25 12:49 UTC |
	|         | journalctl -xeu kubelet --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo cat                            | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:49 UTC | 27 Jan 25 12:49 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo cat                            | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:49 UTC | 27 Jan 25 12:49 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:49 UTC |                     |
	|         | systemctl status docker --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:49 UTC | 27 Jan 25 12:49 UTC |
	|         | systemctl cat docker                                 |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo cat                            | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:49 UTC | 27 Jan 25 12:49 UTC |
	|         | /etc/docker/daemon.json                              |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo docker                         | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC |                     |
	|         | system info                                          |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC |                     |
	|         | systemctl status cri-docker                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | systemctl cat cri-docker                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo cat                            | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo cat                            | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | cri-dockerd --version                                |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC |                     |
	|         | systemctl status containerd                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | systemctl cat containerd                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo cat                            | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | /lib/systemd/system/containerd.service               |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo cat                            | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | /etc/containerd/config.toml                          |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | containerd config dump                               |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | systemctl status crio --all                          |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | systemctl cat crio --no-pager                        |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo find                           | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                        |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo crio                           | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | config                                               |                        |         |         |                     |                     |
	| delete  | -p bridge-956477                                     | bridge-956477          | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	| delete  | -p old-k8s-version-488586                            | old-k8s-version-488586 | jenkins | v1.35.0 | 27 Jan 25 12:57 UTC | 27 Jan 25 12:57 UTC |
	| delete  | -p no-preload-472479                                 | no-preload-472479      | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 12:48:45
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 12:48:45.061131 1790192 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:48:45.061460 1790192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:48:45.061507 1790192 out.go:358] Setting ErrFile to fd 2...
	I0127 12:48:45.061571 1790192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:48:45.061947 1790192 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-1724227/.minikube/bin
	I0127 12:48:45.062550 1790192 out.go:352] Setting JSON to false
	I0127 12:48:45.063760 1790192 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":34266,"bootTime":1737947859,"procs":313,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 12:48:45.063872 1790192 start.go:139] virtualization: kvm guest
	I0127 12:48:45.065969 1790192 out.go:177] * [bridge-956477] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 12:48:45.067136 1790192 out.go:177]   - MINIKUBE_LOCATION=20318
	I0127 12:48:45.067134 1790192 notify.go:220] Checking for updates...
	I0127 12:48:45.068296 1790192 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:48:45.069519 1790192 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20318-1724227/kubeconfig
	I0127 12:48:45.070522 1790192 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-1724227/.minikube
	I0127 12:48:45.071653 1790192 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 12:48:45.072745 1790192 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:48:45.074387 1790192 config.go:182] Loaded profile config "default-k8s-diff-port-485564": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:48:45.074542 1790192 config.go:182] Loaded profile config "no-preload-472479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:48:45.074661 1790192 config.go:182] Loaded profile config "old-k8s-version-488586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 12:48:45.074797 1790192 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:48:45.111354 1790192 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 12:48:45.112385 1790192 start.go:297] selected driver: kvm2
	I0127 12:48:45.112404 1790192 start.go:901] validating driver "kvm2" against <nil>
	I0127 12:48:45.112417 1790192 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:48:45.113111 1790192 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:48:45.113192 1790192 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20318-1724227/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 12:48:45.129191 1790192 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 12:48:45.129247 1790192 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 12:48:45.129509 1790192 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 12:48:45.129542 1790192 cni.go:84] Creating CNI manager for "bridge"
	I0127 12:48:45.129550 1790192 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 12:48:45.129616 1790192 start.go:340] cluster config:
	{Name:bridge-956477 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-956477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:48:45.129762 1790192 iso.go:125] acquiring lock: {Name:mkadfcca31fda677c8c62cad1d325dd2bd0a2473 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:48:45.131229 1790192 out.go:177] * Starting "bridge-956477" primary control-plane node in "bridge-956477" cluster
	I0127 12:48:45.132207 1790192 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 12:48:45.132243 1790192 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 12:48:45.132258 1790192 cache.go:56] Caching tarball of preloaded images
	I0127 12:48:45.132337 1790192 preload.go:172] Found /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 12:48:45.132351 1790192 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 12:48:45.132455 1790192 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/config.json ...
	I0127 12:48:45.132478 1790192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/config.json: {Name:mka55a4b4af7aaf9911ae593f9f5e3f84a3441e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
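
The profile config saved here is plain JSON on disk, so it can be inspected directly. A small sketch, assuming the path from the log and decoding into a generic map rather than minikube's typed config struct; the "Name" and "Driver" keys are inferred from the cluster config dump above:

	// read_profile_config.go - peek at the saved profile config.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("/home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/config.json")
		if err != nil {
			panic(err)
		}
		var cfg map[string]interface{}
		if err := json.Unmarshal(data, &cfg); err != nil {
			panic(err)
		}
		fmt.Println("Name:", cfg["Name"])
		fmt.Println("Driver:", cfg["Driver"])
	}
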
	I0127 12:48:45.133024 1790192 start.go:360] acquireMachinesLock for bridge-956477: {Name:mk206ddd5e564ea6986d65bef76be5837a8b5360 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 12:48:45.133083 1790192 start.go:364] duration metric: took 34.753µs to acquireMachinesLock for "bridge-956477"
	I0127 12:48:45.133110 1790192 start.go:93] Provisioning new machine with config: &{Name:bridge-956477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-956477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 12:48:45.133187 1790192 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 12:48:45.134561 1790192 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0127 12:48:45.134690 1790192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:48:45.134731 1790192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:48:45.149509 1790192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35177
	I0127 12:48:45.150027 1790192 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:48:45.150619 1790192 main.go:141] libmachine: Using API Version  1
	I0127 12:48:45.150641 1790192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:48:45.150972 1790192 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:48:45.151149 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetMachineName
	I0127 12:48:45.151259 1790192 main.go:141] libmachine: (bridge-956477) Calling .DriverName
	I0127 12:48:45.151400 1790192 start.go:159] libmachine.API.Create for "bridge-956477" (driver="kvm2")
	I0127 12:48:45.151431 1790192 client.go:168] LocalClient.Create starting
	I0127 12:48:45.151462 1790192 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem
	I0127 12:48:45.151502 1790192 main.go:141] libmachine: Decoding PEM data...
	I0127 12:48:45.151518 1790192 main.go:141] libmachine: Parsing certificate...
	I0127 12:48:45.151583 1790192 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem
	I0127 12:48:45.151607 1790192 main.go:141] libmachine: Decoding PEM data...
	I0127 12:48:45.151621 1790192 main.go:141] libmachine: Parsing certificate...
	I0127 12:48:45.151653 1790192 main.go:141] libmachine: Running pre-create checks...
	I0127 12:48:45.151666 1790192 main.go:141] libmachine: (bridge-956477) Calling .PreCreateCheck
	I0127 12:48:45.152022 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetConfigRaw
	I0127 12:48:45.152404 1790192 main.go:141] libmachine: Creating machine...
	I0127 12:48:45.152417 1790192 main.go:141] libmachine: (bridge-956477) Calling .Create
	I0127 12:48:45.152533 1790192 main.go:141] libmachine: (bridge-956477) creating KVM machine...
	I0127 12:48:45.152554 1790192 main.go:141] libmachine: (bridge-956477) creating network...
	I0127 12:48:45.153709 1790192 main.go:141] libmachine: (bridge-956477) DBG | found existing default KVM network
	I0127 12:48:45.154981 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:45.154812 1790215 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:13:89:36} reservation:<nil>}
	I0127 12:48:45.156047 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:45.155949 1790215 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:f9:0f:53} reservation:<nil>}
	I0127 12:48:45.156973 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:45.156878 1790215 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:ac:57:68} reservation:<nil>}
	I0127 12:48:45.158158 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:45.158076 1790215 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00039efc0}
	I0127 12:48:45.158183 1790192 main.go:141] libmachine: (bridge-956477) DBG | created network xml: 
	I0127 12:48:45.158196 1790192 main.go:141] libmachine: (bridge-956477) DBG | <network>
	I0127 12:48:45.158206 1790192 main.go:141] libmachine: (bridge-956477) DBG |   <name>mk-bridge-956477</name>
	I0127 12:48:45.158211 1790192 main.go:141] libmachine: (bridge-956477) DBG |   <dns enable='no'/>
	I0127 12:48:45.158215 1790192 main.go:141] libmachine: (bridge-956477) DBG |   
	I0127 12:48:45.158222 1790192 main.go:141] libmachine: (bridge-956477) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0127 12:48:45.158232 1790192 main.go:141] libmachine: (bridge-956477) DBG |     <dhcp>
	I0127 12:48:45.158241 1790192 main.go:141] libmachine: (bridge-956477) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0127 12:48:45.158250 1790192 main.go:141] libmachine: (bridge-956477) DBG |     </dhcp>
	I0127 12:48:45.158258 1790192 main.go:141] libmachine: (bridge-956477) DBG |   </ip>
	I0127 12:48:45.158266 1790192 main.go:141] libmachine: (bridge-956477) DBG |   
	I0127 12:48:45.158275 1790192 main.go:141] libmachine: (bridge-956477) DBG | </network>
	I0127 12:48:45.158288 1790192 main.go:141] libmachine: (bridge-956477) DBG | 
	I0127 12:48:45.163152 1790192 main.go:141] libmachine: (bridge-956477) DBG | trying to create private KVM network mk-bridge-956477 192.168.72.0/24...
	I0127 12:48:45.234336 1790192 main.go:141] libmachine: (bridge-956477) DBG | private KVM network mk-bridge-956477 192.168.72.0/24 created
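
The network creation just logged is, in effect, a libvirt network defined from the XML printed above and then started. A rough equivalent using the virsh CLI from Go, where the XML file path is an assumption and the exec-based approach is only an illustration (the kvm2 driver talks to libvirt through its API rather than shelling out):

	// define_network_sketch.go - define and start a libvirt network from an XML file.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("$ %s %v\n%s", name, args, out)
		if err != nil {
			fmt.Println("error:", err)
		}
	}

	func main() {
		run("virsh", "net-define", "/tmp/mk-bridge-956477.xml") // XML as printed in the log above (path is hypothetical)
		run("virsh", "net-start", "mk-bridge-956477")
		run("virsh", "net-autostart", "mk-bridge-956477")
	}
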
	I0127 12:48:45.234373 1790192 main.go:141] libmachine: (bridge-956477) setting up store path in /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477 ...
	I0127 12:48:45.234401 1790192 main.go:141] libmachine: (bridge-956477) building disk image from file:///home/jenkins/minikube-integration/20318-1724227/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 12:48:45.234417 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:45.234378 1790215 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20318-1724227/.minikube
	I0127 12:48:45.234566 1790192 main.go:141] libmachine: (bridge-956477) Downloading /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20318-1724227/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 12:48:45.542800 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:45.542627 1790215 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/id_rsa...
	I0127 12:48:45.665840 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:45.665684 1790215 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/bridge-956477.rawdisk...
	I0127 12:48:45.665878 1790192 main.go:141] libmachine: (bridge-956477) DBG | Writing magic tar header
	I0127 12:48:45.665895 1790192 main.go:141] libmachine: (bridge-956477) DBG | Writing SSH key tar header
	I0127 12:48:45.665905 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:45.665802 1790215 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477 ...
	I0127 12:48:45.665915 1790192 main.go:141] libmachine: (bridge-956477) setting executable bit set on /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477 (perms=drwx------)
	I0127 12:48:45.665924 1790192 main.go:141] libmachine: (bridge-956477) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477
	I0127 12:48:45.665934 1790192 main.go:141] libmachine: (bridge-956477) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines
	I0127 12:48:45.665954 1790192 main.go:141] libmachine: (bridge-956477) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20318-1724227/.minikube
	I0127 12:48:45.665963 1790192 main.go:141] libmachine: (bridge-956477) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20318-1724227
	I0127 12:48:45.665979 1790192 main.go:141] libmachine: (bridge-956477) setting executable bit set on /home/jenkins/minikube-integration/20318-1724227/.minikube/machines (perms=drwxr-xr-x)
	I0127 12:48:45.665993 1790192 main.go:141] libmachine: (bridge-956477) setting executable bit set on /home/jenkins/minikube-integration/20318-1724227/.minikube (perms=drwxr-xr-x)
	I0127 12:48:45.666023 1790192 main.go:141] libmachine: (bridge-956477) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 12:48:45.666045 1790192 main.go:141] libmachine: (bridge-956477) DBG | checking permissions on dir: /home/jenkins
	I0127 12:48:45.666058 1790192 main.go:141] libmachine: (bridge-956477) setting executable bit set on /home/jenkins/minikube-integration/20318-1724227 (perms=drwxrwxr-x)
	I0127 12:48:45.666069 1790192 main.go:141] libmachine: (bridge-956477) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 12:48:45.666074 1790192 main.go:141] libmachine: (bridge-956477) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 12:48:45.666085 1790192 main.go:141] libmachine: (bridge-956477) creating domain...
	I0127 12:48:45.666092 1790192 main.go:141] libmachine: (bridge-956477) DBG | checking permissions on dir: /home
	I0127 12:48:45.666099 1790192 main.go:141] libmachine: (bridge-956477) DBG | skipping /home - not owner
	I0127 12:48:45.667183 1790192 main.go:141] libmachine: (bridge-956477) define libvirt domain using xml: 
	I0127 12:48:45.667207 1790192 main.go:141] libmachine: (bridge-956477) <domain type='kvm'>
	I0127 12:48:45.667217 1790192 main.go:141] libmachine: (bridge-956477)   <name>bridge-956477</name>
	I0127 12:48:45.667225 1790192 main.go:141] libmachine: (bridge-956477)   <memory unit='MiB'>3072</memory>
	I0127 12:48:45.667233 1790192 main.go:141] libmachine: (bridge-956477)   <vcpu>2</vcpu>
	I0127 12:48:45.667241 1790192 main.go:141] libmachine: (bridge-956477)   <features>
	I0127 12:48:45.667252 1790192 main.go:141] libmachine: (bridge-956477)     <acpi/>
	I0127 12:48:45.667256 1790192 main.go:141] libmachine: (bridge-956477)     <apic/>
	I0127 12:48:45.667262 1790192 main.go:141] libmachine: (bridge-956477)     <pae/>
	I0127 12:48:45.667266 1790192 main.go:141] libmachine: (bridge-956477)     
	I0127 12:48:45.667283 1790192 main.go:141] libmachine: (bridge-956477)   </features>
	I0127 12:48:45.667291 1790192 main.go:141] libmachine: (bridge-956477)   <cpu mode='host-passthrough'>
	I0127 12:48:45.667311 1790192 main.go:141] libmachine: (bridge-956477)   
	I0127 12:48:45.667327 1790192 main.go:141] libmachine: (bridge-956477)   </cpu>
	I0127 12:48:45.667351 1790192 main.go:141] libmachine: (bridge-956477)   <os>
	I0127 12:48:45.667372 1790192 main.go:141] libmachine: (bridge-956477)     <type>hvm</type>
	I0127 12:48:45.667389 1790192 main.go:141] libmachine: (bridge-956477)     <boot dev='cdrom'/>
	I0127 12:48:45.667405 1790192 main.go:141] libmachine: (bridge-956477)     <boot dev='hd'/>
	I0127 12:48:45.667416 1790192 main.go:141] libmachine: (bridge-956477)     <bootmenu enable='no'/>
	I0127 12:48:45.667423 1790192 main.go:141] libmachine: (bridge-956477)   </os>
	I0127 12:48:45.667433 1790192 main.go:141] libmachine: (bridge-956477)   <devices>
	I0127 12:48:45.667441 1790192 main.go:141] libmachine: (bridge-956477)     <disk type='file' device='cdrom'>
	I0127 12:48:45.667452 1790192 main.go:141] libmachine: (bridge-956477)       <source file='/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/boot2docker.iso'/>
	I0127 12:48:45.667459 1790192 main.go:141] libmachine: (bridge-956477)       <target dev='hdc' bus='scsi'/>
	I0127 12:48:45.667464 1790192 main.go:141] libmachine: (bridge-956477)       <readonly/>
	I0127 12:48:45.667470 1790192 main.go:141] libmachine: (bridge-956477)     </disk>
	I0127 12:48:45.667480 1790192 main.go:141] libmachine: (bridge-956477)     <disk type='file' device='disk'>
	I0127 12:48:45.667502 1790192 main.go:141] libmachine: (bridge-956477)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 12:48:45.667514 1790192 main.go:141] libmachine: (bridge-956477)       <source file='/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/bridge-956477.rawdisk'/>
	I0127 12:48:45.667519 1790192 main.go:141] libmachine: (bridge-956477)       <target dev='hda' bus='virtio'/>
	I0127 12:48:45.667527 1790192 main.go:141] libmachine: (bridge-956477)     </disk>
	I0127 12:48:45.667531 1790192 main.go:141] libmachine: (bridge-956477)     <interface type='network'>
	I0127 12:48:45.667537 1790192 main.go:141] libmachine: (bridge-956477)       <source network='mk-bridge-956477'/>
	I0127 12:48:45.667544 1790192 main.go:141] libmachine: (bridge-956477)       <model type='virtio'/>
	I0127 12:48:45.667549 1790192 main.go:141] libmachine: (bridge-956477)     </interface>
	I0127 12:48:45.667555 1790192 main.go:141] libmachine: (bridge-956477)     <interface type='network'>
	I0127 12:48:45.667582 1790192 main.go:141] libmachine: (bridge-956477)       <source network='default'/>
	I0127 12:48:45.667600 1790192 main.go:141] libmachine: (bridge-956477)       <model type='virtio'/>
	I0127 12:48:45.667613 1790192 main.go:141] libmachine: (bridge-956477)     </interface>
	I0127 12:48:45.667621 1790192 main.go:141] libmachine: (bridge-956477)     <serial type='pty'>
	I0127 12:48:45.667633 1790192 main.go:141] libmachine: (bridge-956477)       <target port='0'/>
	I0127 12:48:45.667640 1790192 main.go:141] libmachine: (bridge-956477)     </serial>
	I0127 12:48:45.667651 1790192 main.go:141] libmachine: (bridge-956477)     <console type='pty'>
	I0127 12:48:45.667662 1790192 main.go:141] libmachine: (bridge-956477)       <target type='serial' port='0'/>
	I0127 12:48:45.667673 1790192 main.go:141] libmachine: (bridge-956477)     </console>
	I0127 12:48:45.667691 1790192 main.go:141] libmachine: (bridge-956477)     <rng model='virtio'>
	I0127 12:48:45.667705 1790192 main.go:141] libmachine: (bridge-956477)       <backend model='random'>/dev/random</backend>
	I0127 12:48:45.667714 1790192 main.go:141] libmachine: (bridge-956477)     </rng>
	I0127 12:48:45.667722 1790192 main.go:141] libmachine: (bridge-956477)     
	I0127 12:48:45.667731 1790192 main.go:141] libmachine: (bridge-956477)     
	I0127 12:48:45.667740 1790192 main.go:141] libmachine: (bridge-956477)   </devices>
	I0127 12:48:45.667749 1790192 main.go:141] libmachine: (bridge-956477) </domain>
	I0127 12:48:45.667765 1790192 main.go:141] libmachine: (bridge-956477) 
	I0127 12:48:45.672524 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:ac:62:83 in network default
	I0127 12:48:45.673006 1790192 main.go:141] libmachine: (bridge-956477) starting domain...
	I0127 12:48:45.673024 1790192 main.go:141] libmachine: (bridge-956477) ensuring networks are active...
	I0127 12:48:45.673031 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:45.673650 1790192 main.go:141] libmachine: (bridge-956477) Ensuring network default is active
	I0127 12:48:45.673918 1790192 main.go:141] libmachine: (bridge-956477) Ensuring network mk-bridge-956477 is active
	I0127 12:48:45.674443 1790192 main.go:141] libmachine: (bridge-956477) getting domain XML...
	I0127 12:48:45.675241 1790192 main.go:141] libmachine: (bridge-956477) creating domain...
	I0127 12:48:46.910072 1790192 main.go:141] libmachine: (bridge-956477) waiting for IP...
	I0127 12:48:46.910991 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:46.911503 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:46.911587 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:46.911518 1790215 retry.go:31] will retry after 215.854927ms: waiting for domain to come up
	I0127 12:48:47.128865 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:47.129422 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:47.129454 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:47.129389 1790215 retry.go:31] will retry after 345.744835ms: waiting for domain to come up
	I0127 12:48:47.476809 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:47.477321 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:47.477351 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:47.477304 1790215 retry.go:31] will retry after 387.587044ms: waiting for domain to come up
	I0127 12:48:47.867011 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:47.867519 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:47.867563 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:47.867512 1790215 retry.go:31] will retry after 564.938674ms: waiting for domain to come up
	I0127 12:48:48.434398 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:48.434970 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:48.434999 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:48.434928 1790215 retry.go:31] will retry after 628.439712ms: waiting for domain to come up
	I0127 12:48:49.064853 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:49.065323 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:49.065358 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:49.065288 1790215 retry.go:31] will retry after 745.70592ms: waiting for domain to come up
	I0127 12:48:49.813123 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:49.813748 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:49.813780 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:49.813723 1790215 retry.go:31] will retry after 1.074334161s: waiting for domain to come up
	I0127 12:48:50.889220 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:50.889785 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:50.889855 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:50.889789 1790215 retry.go:31] will retry after 1.318459201s: waiting for domain to come up
	I0127 12:48:52.210197 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:52.210618 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:52.210645 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:52.210599 1790215 retry.go:31] will retry after 1.764815725s: waiting for domain to come up
	I0127 12:48:53.976580 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:53.977130 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:53.977158 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:53.977081 1790215 retry.go:31] will retry after 1.410873374s: waiting for domain to come up
	I0127 12:48:55.389480 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:55.389911 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:55.389944 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:55.389893 1790215 retry.go:31] will retry after 2.738916299s: waiting for domain to come up
	I0127 12:48:58.130207 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:58.130681 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:58.130707 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:58.130646 1790215 retry.go:31] will retry after 3.218706779s: waiting for domain to come up
	I0127 12:49:01.351430 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:01.351988 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:49:01.352019 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:49:01.351955 1790215 retry.go:31] will retry after 4.065804066s: waiting for domain to come up
	I0127 12:49:05.419663 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:05.420108 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has current primary IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:05.420160 1790192 main.go:141] libmachine: (bridge-956477) found domain IP: 192.168.72.28
	I0127 12:49:05.420175 1790192 main.go:141] libmachine: (bridge-956477) reserving static IP address...
	I0127 12:49:05.420595 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find host DHCP lease matching {name: "bridge-956477", mac: "52:54:00:49:99:d8", ip: "192.168.72.28"} in network mk-bridge-956477
	I0127 12:49:05.499266 1790192 main.go:141] libmachine: (bridge-956477) reserved static IP address 192.168.72.28 for domain bridge-956477
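
The "waiting for IP" retry loop above can be approximated by polling libvirt's DHCP leases for the domain's MAC address. A sketch under that assumption, with the network name and MAC copied from the log and a fixed poll interval instead of the driver's growing backoff:

	// wait_for_ip_sketch.go - poll virsh net-dhcp-leases until the MAC gets an address.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		const network, mac = "mk-bridge-956477", "52:54:00:49:99:d8"
		for i := 0; i < 20; i++ {
			out, _ := exec.Command("virsh", "net-dhcp-leases", network).Output()
			for _, line := range strings.Split(string(out), "\n") {
				if strings.Contains(line, mac) {
					fmt.Println("lease found:", strings.TrimSpace(line))
					return
				}
			}
			time.Sleep(2 * time.Second) // the real code backs off with growing delays, as the retries above show
		}
		fmt.Println("domain never obtained an IP")
	}
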
	I0127 12:49:05.499303 1790192 main.go:141] libmachine: (bridge-956477) waiting for SSH...
	I0127 12:49:05.499314 1790192 main.go:141] libmachine: (bridge-956477) DBG | Getting to WaitForSSH function...
	I0127 12:49:05.501992 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:05.502523 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:minikube Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:05.502574 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:05.502769 1790192 main.go:141] libmachine: (bridge-956477) DBG | Using SSH client type: external
	I0127 12:49:05.502798 1790192 main.go:141] libmachine: (bridge-956477) DBG | Using SSH private key: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/id_rsa (-rw-------)
	I0127 12:49:05.502836 1790192 main.go:141] libmachine: (bridge-956477) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.28 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 12:49:05.502851 1790192 main.go:141] libmachine: (bridge-956477) DBG | About to run SSH command:
	I0127 12:49:05.502863 1790192 main.go:141] libmachine: (bridge-956477) DBG | exit 0
	I0127 12:49:05.630859 1790192 main.go:141] libmachine: (bridge-956477) DBG | SSH cmd err, output: <nil>: 
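
The WaitForSSH step shown here just runs "exit 0" over SSH with the printed options until it succeeds. A single-attempt sketch using the ssh binary, with the key path and address copied from the log and the option list trimmed for brevity:

	// wait_for_ssh_sketch.go - probe SSH availability by running "exit 0".
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		args := []string{
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", "/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/id_rsa",
			"docker@192.168.72.28",
			"exit 0",
		}
		if err := exec.Command("ssh", args...).Run(); err != nil {
			fmt.Println("ssh not ready yet:", err)
			return
		}
		fmt.Println("ssh is available")
	}
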
	I0127 12:49:05.631203 1790192 main.go:141] libmachine: (bridge-956477) KVM machine creation complete
	I0127 12:49:05.631537 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetConfigRaw
	I0127 12:49:05.632120 1790192 main.go:141] libmachine: (bridge-956477) Calling .DriverName
	I0127 12:49:05.632328 1790192 main.go:141] libmachine: (bridge-956477) Calling .DriverName
	I0127 12:49:05.632512 1790192 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0127 12:49:05.632550 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetState
	I0127 12:49:05.633838 1790192 main.go:141] libmachine: Detecting operating system of created instance...
	I0127 12:49:05.633852 1790192 main.go:141] libmachine: Waiting for SSH to be available...
	I0127 12:49:05.633858 1790192 main.go:141] libmachine: Getting to WaitForSSH function...
	I0127 12:49:05.633864 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:05.635988 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:05.636359 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:05.636387 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:05.636482 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:05.636688 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:05.636840 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:05.636999 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:05.637148 1790192 main.go:141] libmachine: Using SSH client type: native
	I0127 12:49:05.637417 1790192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0127 12:49:05.637432 1790192 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0127 12:49:05.753913 1790192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 12:49:05.753957 1790192 main.go:141] libmachine: Detecting the provisioner...
	I0127 12:49:05.753969 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:05.757035 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:05.757484 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:05.757521 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:05.757749 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:05.757961 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:05.758132 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:05.758270 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:05.758481 1790192 main.go:141] libmachine: Using SSH client type: native
	I0127 12:49:05.758721 1790192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0127 12:49:05.758739 1790192 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0127 12:49:05.871011 1790192 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0127 12:49:05.871181 1790192 main.go:141] libmachine: found compatible host: buildroot
	I0127 12:49:05.871198 1790192 main.go:141] libmachine: Provisioning with buildroot...
	I0127 12:49:05.871211 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetMachineName
	I0127 12:49:05.871499 1790192 buildroot.go:166] provisioning hostname "bridge-956477"
	I0127 12:49:05.871532 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetMachineName
	I0127 12:49:05.871711 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:05.874488 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:05.874941 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:05.874964 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:05.875152 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:05.875328 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:05.875456 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:05.875555 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:05.875684 1790192 main.go:141] libmachine: Using SSH client type: native
	I0127 12:49:05.875864 1790192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0127 12:49:05.875875 1790192 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-956477 && echo "bridge-956477" | sudo tee /etc/hostname
	I0127 12:49:05.999963 1790192 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-956477
	
	I0127 12:49:06.000010 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:06.002594 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.003041 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.003070 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.003263 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:06.003462 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:06.003628 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:06.003746 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:06.003889 1790192 main.go:141] libmachine: Using SSH client type: native
	I0127 12:49:06.004099 1790192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0127 12:49:06.004116 1790192 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-956477' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-956477/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-956477' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 12:49:06.126689 1790192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 12:49:06.126724 1790192 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20318-1724227/.minikube CaCertPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20318-1724227/.minikube}
	I0127 12:49:06.126788 1790192 buildroot.go:174] setting up certificates
	I0127 12:49:06.126798 1790192 provision.go:84] configureAuth start
	I0127 12:49:06.126811 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetMachineName
	I0127 12:49:06.127071 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetIP
	I0127 12:49:06.129597 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.129936 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.129956 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.130134 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:06.132135 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.132428 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.132453 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.132601 1790192 provision.go:143] copyHostCerts
	I0127 12:49:06.132670 1790192 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.pem, removing ...
	I0127 12:49:06.132693 1790192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.pem
	I0127 12:49:06.132778 1790192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.pem (1078 bytes)
	I0127 12:49:06.132883 1790192 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-1724227/.minikube/cert.pem, removing ...
	I0127 12:49:06.132896 1790192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-1724227/.minikube/cert.pem
	I0127 12:49:06.132941 1790192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20318-1724227/.minikube/cert.pem (1123 bytes)
	I0127 12:49:06.133012 1790192 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-1724227/.minikube/key.pem, removing ...
	I0127 12:49:06.133023 1790192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-1724227/.minikube/key.pem
	I0127 12:49:06.133056 1790192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20318-1724227/.minikube/key.pem (1675 bytes)
	I0127 12:49:06.133127 1790192 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca-key.pem org=jenkins.bridge-956477 san=[127.0.0.1 192.168.72.28 bridge-956477 localhost minikube]
	I0127 12:49:06.244065 1790192 provision.go:177] copyRemoteCerts
	I0127 12:49:06.244134 1790192 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 12:49:06.244179 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:06.247068 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.247401 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.247439 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.247543 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:06.247734 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:06.247886 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:06.248045 1790192 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/id_rsa Username:docker}
	I0127 12:49:06.332164 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 12:49:06.355222 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0127 12:49:06.377606 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 12:49:06.400935 1790192 provision.go:87] duration metric: took 274.121357ms to configureAuth
	I0127 12:49:06.400966 1790192 buildroot.go:189] setting minikube options for container-runtime
	I0127 12:49:06.401190 1790192 config.go:182] Loaded profile config "bridge-956477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:49:06.401304 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:06.403876 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.404282 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.404311 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.404522 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:06.404717 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:06.404875 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:06.405024 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:06.405242 1790192 main.go:141] libmachine: Using SSH client type: native
	I0127 12:49:06.405432 1790192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0127 12:49:06.405453 1790192 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 12:49:06.632004 1790192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 12:49:06.632052 1790192 main.go:141] libmachine: Checking connection to Docker...
	I0127 12:49:06.632066 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetURL
	I0127 12:49:06.633455 1790192 main.go:141] libmachine: (bridge-956477) DBG | using libvirt version 6000000
	I0127 12:49:06.635940 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.636296 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.636319 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.636439 1790192 main.go:141] libmachine: Docker is up and running!
	I0127 12:49:06.636466 1790192 main.go:141] libmachine: Reticulating splines...
	I0127 12:49:06.636474 1790192 client.go:171] duration metric: took 21.485034654s to LocalClient.Create
	I0127 12:49:06.636493 1790192 start.go:167] duration metric: took 21.485094344s to libmachine.API.Create "bridge-956477"
	I0127 12:49:06.636508 1790192 start.go:293] postStartSetup for "bridge-956477" (driver="kvm2")
	I0127 12:49:06.636525 1790192 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 12:49:06.636556 1790192 main.go:141] libmachine: (bridge-956477) Calling .DriverName
	I0127 12:49:06.636838 1790192 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 12:49:06.636862 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:06.639069 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.639386 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.639422 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.639563 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:06.639752 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:06.639929 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:06.640062 1790192 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/id_rsa Username:docker}
	I0127 12:49:06.724850 1790192 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 12:49:06.729112 1790192 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 12:49:06.729134 1790192 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-1724227/.minikube/addons for local assets ...
	I0127 12:49:06.729192 1790192 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-1724227/.minikube/files for local assets ...
	I0127 12:49:06.729293 1790192 filesync.go:149] local asset: /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem -> 17313962.pem in /etc/ssl/certs
	I0127 12:49:06.729434 1790192 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 12:49:06.738467 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem --> /etc/ssl/certs/17313962.pem (1708 bytes)
	I0127 12:49:06.761545 1790192 start.go:296] duration metric: took 125.019791ms for postStartSetup
	I0127 12:49:06.761593 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetConfigRaw
	I0127 12:49:06.762205 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetIP
	I0127 12:49:06.765437 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.765808 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.765828 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.766138 1790192 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/config.json ...
	I0127 12:49:06.766350 1790192 start.go:128] duration metric: took 21.63314943s to createHost
	I0127 12:49:06.766380 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:06.768832 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.769141 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.769168 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.769330 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:06.769547 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:06.769745 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:06.769899 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:06.770075 1790192 main.go:141] libmachine: Using SSH client type: native
	I0127 12:49:06.770262 1790192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0127 12:49:06.770272 1790192 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 12:49:06.887120 1790192 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737982146.857755472
	
	I0127 12:49:06.887157 1790192 fix.go:216] guest clock: 1737982146.857755472
	I0127 12:49:06.887177 1790192 fix.go:229] Guest: 2025-01-27 12:49:06.857755472 +0000 UTC Remote: 2025-01-27 12:49:06.76636518 +0000 UTC m=+21.744166745 (delta=91.390292ms)
	I0127 12:49:06.887213 1790192 fix.go:200] guest clock delta is within tolerance: 91.390292ms
	I0127 12:49:06.887222 1790192 start.go:83] releasing machines lock for "bridge-956477", held for 21.754125785s
	I0127 12:49:06.887266 1790192 main.go:141] libmachine: (bridge-956477) Calling .DriverName
	I0127 12:49:06.887556 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetIP
	I0127 12:49:06.890291 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.890686 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.890715 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.890834 1790192 main.go:141] libmachine: (bridge-956477) Calling .DriverName
	I0127 12:49:06.891309 1790192 main.go:141] libmachine: (bridge-956477) Calling .DriverName
	I0127 12:49:06.891479 1790192 main.go:141] libmachine: (bridge-956477) Calling .DriverName
	I0127 12:49:06.891572 1790192 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 12:49:06.891614 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:06.891715 1790192 ssh_runner.go:195] Run: cat /version.json
	I0127 12:49:06.891742 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:06.894127 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.894492 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.894531 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.894720 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.894976 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:06.895300 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:06.895305 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.895579 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.895614 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:06.895836 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:06.895831 1790192 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/id_rsa Username:docker}
	I0127 12:49:06.896003 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:06.896190 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:06.896366 1790192 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/id_rsa Username:docker}
	I0127 12:49:07.014147 1790192 ssh_runner.go:195] Run: systemctl --version
	I0127 12:49:07.020023 1790192 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 12:49:07.181331 1790192 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 12:49:07.186863 1790192 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 12:49:07.186954 1790192 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 12:49:07.203385 1790192 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 12:49:07.203419 1790192 start.go:495] detecting cgroup driver to use...
	I0127 12:49:07.203478 1790192 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 12:49:07.218431 1790192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 12:49:07.231459 1790192 docker.go:217] disabling cri-docker service (if available) ...
	I0127 12:49:07.231505 1790192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 12:49:07.244939 1790192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 12:49:07.257985 1790192 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 12:49:07.382245 1790192 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 12:49:07.544971 1790192 docker.go:233] disabling docker service ...
	I0127 12:49:07.545044 1790192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 12:49:07.559296 1790192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 12:49:07.572107 1790192 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 12:49:07.710722 1790192 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 12:49:07.842352 1790192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 12:49:07.856902 1790192 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 12:49:07.873833 1790192 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 12:49:07.873895 1790192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:49:07.883449 1790192 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 12:49:07.883540 1790192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:49:07.893268 1790192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:49:07.902934 1790192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:49:07.913200 1790192 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 12:49:07.923183 1790192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:49:07.932933 1790192 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:49:07.948940 1790192 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:49:07.958726 1790192 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 12:49:07.967409 1790192 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 12:49:07.967473 1790192 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 12:49:07.979872 1790192 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 12:49:07.988693 1790192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:49:08.106626 1790192 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 12:49:08.190261 1790192 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 12:49:08.190341 1790192 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 12:49:08.195228 1790192 start.go:563] Will wait 60s for crictl version
	I0127 12:49:08.195312 1790192 ssh_runner.go:195] Run: which crictl
	I0127 12:49:08.198797 1790192 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 12:49:08.237887 1790192 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 12:49:08.238012 1790192 ssh_runner.go:195] Run: crio --version
	I0127 12:49:08.263030 1790192 ssh_runner.go:195] Run: crio --version
	I0127 12:49:08.290320 1790192 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 12:49:08.291370 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetIP
	I0127 12:49:08.294322 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:08.294643 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:08.294675 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:08.294858 1790192 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0127 12:49:08.298640 1790192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:49:08.311920 1790192 kubeadm.go:883] updating cluster {Name:bridge-956477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-956477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.28 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 12:49:08.312091 1790192 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 12:49:08.312156 1790192 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 12:49:08.343416 1790192 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 12:49:08.343484 1790192 ssh_runner.go:195] Run: which lz4
	I0127 12:49:08.347177 1790192 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 12:49:08.351091 1790192 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 12:49:08.351126 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0127 12:49:09.560777 1790192 crio.go:462] duration metric: took 1.213632525s to copy over tarball
	I0127 12:49:09.560892 1790192 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 12:49:11.737884 1790192 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.176958842s)
	I0127 12:49:11.737916 1790192 crio.go:469] duration metric: took 2.177103692s to extract the tarball
	I0127 12:49:11.737927 1790192 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 12:49:11.774005 1790192 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 12:49:11.812704 1790192 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 12:49:11.812729 1790192 cache_images.go:84] Images are preloaded, skipping loading
	I0127 12:49:11.812737 1790192 kubeadm.go:934] updating node { 192.168.72.28 8443 v1.32.1 crio true true} ...
	I0127 12:49:11.812874 1790192 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-956477 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.28
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:bridge-956477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0127 12:49:11.812971 1790192 ssh_runner.go:195] Run: crio config
	I0127 12:49:11.868174 1790192 cni.go:84] Creating CNI manager for "bridge"
	I0127 12:49:11.868200 1790192 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 12:49:11.868222 1790192 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.28 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-956477 NodeName:bridge-956477 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.28"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.28 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 12:49:11.868356 1790192 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.28
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-956477"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.28"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.28"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 12:49:11.868420 1790192 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 12:49:11.877576 1790192 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 12:49:11.877641 1790192 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 12:49:11.886156 1790192 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0127 12:49:11.901855 1790192 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 12:49:11.917311 1790192 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I0127 12:49:11.933025 1790192 ssh_runner.go:195] Run: grep 192.168.72.28	control-plane.minikube.internal$ /etc/hosts
	I0127 12:49:11.936616 1790192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.28	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:49:11.948439 1790192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:49:12.060451 1790192 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:49:12.076612 1790192 certs.go:68] Setting up /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477 for IP: 192.168.72.28
	I0127 12:49:12.076638 1790192 certs.go:194] generating shared ca certs ...
	I0127 12:49:12.076680 1790192 certs.go:226] acquiring lock for ca certs: {Name:mk9e5c59e9dbe250ccc1601895509e2d4f3690e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:49:12.076872 1790192 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.key
	I0127 12:49:12.076941 1790192 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.key
	I0127 12:49:12.076955 1790192 certs.go:256] generating profile certs ...
	I0127 12:49:12.077065 1790192 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/client.key
	I0127 12:49:12.077096 1790192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/client.crt with IP's: []
	I0127 12:49:12.388180 1790192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/client.crt ...
	I0127 12:49:12.388212 1790192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/client.crt: {Name:mk35e754849912c2ccbef7aee78a8cb664d71760 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:49:12.393143 1790192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/client.key ...
	I0127 12:49:12.393176 1790192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/client.key: {Name:mk1a4eb1684f2df27d8a0393e4c3ccce9e3de875 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:49:12.393803 1790192 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.key.754e3ec9
	I0127 12:49:12.393834 1790192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.crt.754e3ec9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.28]
	I0127 12:49:12.504705 1790192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.crt.754e3ec9 ...
	I0127 12:49:12.504741 1790192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.crt.754e3ec9: {Name:mkc470d67580d2e81bf8ee097c21f9b4e89d97ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:49:12.504924 1790192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.key.754e3ec9 ...
	I0127 12:49:12.504944 1790192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.key.754e3ec9: {Name:mkfe8a7bf14247bc7909277acbea55dbda14424f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:49:12.505661 1790192 certs.go:381] copying /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.crt.754e3ec9 -> /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.crt
	I0127 12:49:12.505776 1790192 certs.go:385] copying /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.key.754e3ec9 -> /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.key
	I0127 12:49:12.505863 1790192 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/proxy-client.key
	I0127 12:49:12.505887 1790192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/proxy-client.crt with IP's: []
	I0127 12:49:12.609829 1790192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/proxy-client.crt ...
	I0127 12:49:12.609856 1790192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/proxy-client.crt: {Name:mk6cb77c1a7b511e7130b2dd7423c6ba9c6d37ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:49:12.610644 1790192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/proxy-client.key ...
	I0127 12:49:12.610664 1790192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/proxy-client.key: {Name:mkd90fcc60d00c9236b383668f8a16c0de9554e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:49:12.614971 1790192 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/1731396.pem (1338 bytes)
	W0127 12:49:12.615016 1790192 certs.go:480] ignoring /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/1731396_empty.pem, impossibly tiny 0 bytes
	I0127 12:49:12.615026 1790192 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 12:49:12.615065 1790192 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem (1078 bytes)
	I0127 12:49:12.615119 1790192 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem (1123 bytes)
	I0127 12:49:12.615159 1790192 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/key.pem (1675 bytes)
	I0127 12:49:12.615202 1790192 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem (1708 bytes)
	I0127 12:49:12.615902 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 12:49:12.642386 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 12:49:12.667109 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 12:49:12.688637 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 12:49:12.711307 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0127 12:49:12.732852 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 12:49:12.756599 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 12:49:12.812442 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 12:49:12.836060 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/1731396.pem --> /usr/share/ca-certificates/1731396.pem (1338 bytes)
	I0127 12:49:12.857115 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem --> /usr/share/ca-certificates/17313962.pem (1708 bytes)
	I0127 12:49:12.879108 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 12:49:12.900872 1790192 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 12:49:12.917407 1790192 ssh_runner.go:195] Run: openssl version
	I0127 12:49:12.922608 1790192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1731396.pem && ln -fs /usr/share/ca-certificates/1731396.pem /etc/ssl/certs/1731396.pem"
	I0127 12:49:12.933376 1790192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1731396.pem
	I0127 12:49:12.937409 1790192 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 11:32 /usr/share/ca-certificates/1731396.pem
	I0127 12:49:12.937451 1790192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1731396.pem
	I0127 12:49:12.942881 1790192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1731396.pem /etc/ssl/certs/51391683.0"
	I0127 12:49:12.953628 1790192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17313962.pem && ln -fs /usr/share/ca-certificates/17313962.pem /etc/ssl/certs/17313962.pem"
	I0127 12:49:12.964554 1790192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17313962.pem
	I0127 12:49:12.968534 1790192 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 11:32 /usr/share/ca-certificates/17313962.pem
	I0127 12:49:12.968581 1790192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17313962.pem
	I0127 12:49:12.973893 1790192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17313962.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 12:49:12.984546 1790192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 12:49:12.994913 1790192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:49:12.998791 1790192 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 11:24 /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:49:12.998841 1790192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:49:13.003870 1790192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 12:49:13.013262 1790192 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 12:49:13.016784 1790192 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 12:49:13.016833 1790192 kubeadm.go:392] StartCluster: {Name:bridge-956477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-956477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.28 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:49:13.016911 1790192 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 12:49:13.016987 1790192 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 12:49:13.050812 1790192 cri.go:89] found id: ""
	I0127 12:49:13.050889 1790192 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 12:49:13.059865 1790192 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 12:49:13.068783 1790192 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 12:49:13.077676 1790192 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 12:49:13.077698 1790192 kubeadm.go:157] found existing configuration files:
	
	I0127 12:49:13.077743 1790192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 12:49:13.086826 1790192 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 12:49:13.086886 1790192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 12:49:13.096763 1790192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 12:49:13.106090 1790192 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 12:49:13.106152 1790192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 12:49:13.115056 1790192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 12:49:13.123311 1790192 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 12:49:13.123381 1790192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 12:49:13.134697 1790192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 12:49:13.145287 1790192 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 12:49:13.145360 1790192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 12:49:13.156930 1790192 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 12:49:13.215215 1790192 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 12:49:13.215384 1790192 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 12:49:13.321518 1790192 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 12:49:13.321678 1790192 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 12:49:13.321803 1790192 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 12:49:13.332363 1790192 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 12:49:13.473799 1790192 out.go:235]   - Generating certificates and keys ...
	I0127 12:49:13.473979 1790192 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 12:49:13.474081 1790192 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 12:49:13.685866 1790192 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 12:49:13.770778 1790192 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 12:49:14.148126 1790192 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 12:49:14.239549 1790192 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 12:49:14.286201 1790192 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 12:49:14.286341 1790192 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-956477 localhost] and IPs [192.168.72.28 127.0.0.1 ::1]
	I0127 12:49:14.383724 1790192 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 12:49:14.383950 1790192 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-956477 localhost] and IPs [192.168.72.28 127.0.0.1 ::1]
	I0127 12:49:14.501996 1790192 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 12:49:14.665536 1790192 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 12:49:14.804446 1790192 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 12:49:14.804529 1790192 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 12:49:14.897657 1790192 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 12:49:14.966489 1790192 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 12:49:15.104336 1790192 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 12:49:15.164491 1790192 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 12:49:15.350906 1790192 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 12:49:15.351563 1790192 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 12:49:15.354014 1790192 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 12:49:15.355551 1790192 out.go:235]   - Booting up control plane ...
	I0127 12:49:15.355691 1790192 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 12:49:15.355786 1790192 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 12:49:15.356057 1790192 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 12:49:15.370685 1790192 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 12:49:15.376916 1790192 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 12:49:15.377006 1790192 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 12:49:15.515590 1790192 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 12:49:15.515750 1790192 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 12:49:16.516381 1790192 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001998745s
	I0127 12:49:16.516512 1790192 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 12:49:21.514222 1790192 kubeadm.go:310] [api-check] The API server is healthy after 5.001594227s
	I0127 12:49:21.532591 1790192 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 12:49:21.554627 1790192 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 12:49:21.596778 1790192 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 12:49:21.597017 1790192 kubeadm.go:310] [mark-control-plane] Marking the node bridge-956477 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 12:49:21.613382 1790192 kubeadm.go:310] [bootstrap-token] Using token: y217q3.atj9ddkanm9dqcqt
	I0127 12:49:21.614522 1790192 out.go:235]   - Configuring RBAC rules ...
	I0127 12:49:21.614665 1790192 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 12:49:21.626049 1790192 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 12:49:21.635045 1790192 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 12:49:21.642711 1790192 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 12:49:21.646716 1790192 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 12:49:21.650577 1790192 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 12:49:21.921382 1790192 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 12:49:22.339910 1790192 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 12:49:22.920294 1790192 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 12:49:22.921302 1790192 kubeadm.go:310] 
	I0127 12:49:22.921394 1790192 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 12:49:22.921411 1790192 kubeadm.go:310] 
	I0127 12:49:22.921499 1790192 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 12:49:22.921508 1790192 kubeadm.go:310] 
	I0127 12:49:22.921542 1790192 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 12:49:22.921642 1790192 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 12:49:22.921726 1790192 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 12:49:22.921741 1790192 kubeadm.go:310] 
	I0127 12:49:22.921806 1790192 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 12:49:22.921817 1790192 kubeadm.go:310] 
	I0127 12:49:22.921886 1790192 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 12:49:22.921897 1790192 kubeadm.go:310] 
	I0127 12:49:22.921961 1790192 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 12:49:22.922086 1790192 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 12:49:22.922181 1790192 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 12:49:22.922191 1790192 kubeadm.go:310] 
	I0127 12:49:22.922311 1790192 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 12:49:22.922407 1790192 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 12:49:22.922421 1790192 kubeadm.go:310] 
	I0127 12:49:22.922529 1790192 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token y217q3.atj9ddkanm9dqcqt \
	I0127 12:49:22.922664 1790192 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b321ef7b3ab14f6d2ff43d6919f4b8b8c82a4707be0e2d90af4f9d1ba84cbc1f \
	I0127 12:49:22.922701 1790192 kubeadm.go:310] 	--control-plane 
	I0127 12:49:22.922707 1790192 kubeadm.go:310] 
	I0127 12:49:22.922801 1790192 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 12:49:22.922809 1790192 kubeadm.go:310] 
	I0127 12:49:22.922871 1790192 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token y217q3.atj9ddkanm9dqcqt \
	I0127 12:49:22.922996 1790192 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b321ef7b3ab14f6d2ff43d6919f4b8b8c82a4707be0e2d90af4f9d1ba84cbc1f 
	I0127 12:49:22.923821 1790192 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
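The `--discovery-token-ca-cert-hash` in the join commands above is the SHA-256 digest of the cluster CA's public key. Since this profile keeps its certificates under /var/lib/minikube/certs (see the certificateDir line earlier in the init output), the hash can be recomputed on the control-plane node with the standard kubeadm-documented pipeline, roughly:

	# sketch: recompute the CA public-key hash quoted in the join command above
	sudo openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'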
	I0127 12:49:22.924014 1790192 cni.go:84] Creating CNI manager for "bridge"
	I0127 12:49:22.926262 1790192 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 12:49:22.927449 1790192 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 12:49:22.937784 1790192 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
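The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. As an illustration only (field values assumed, not the literal file minikube generated), a bridge-type conflist of this kind usually has roughly the following shape and could be written the same way with a heredoc:

	# illustrative sketch of a bridge CNI conflist; not the exact file from this run
	cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF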
	I0127 12:49:22.955872 1790192 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 12:49:22.955954 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:49:22.956000 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-956477 minikube.k8s.io/updated_at=2025_01_27T12_49_22_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650 minikube.k8s.io/name=bridge-956477 minikube.k8s.io/primary=true
	I0127 12:49:22.984921 1790192 ops.go:34] apiserver oom_adj: -16
	I0127 12:49:23.101816 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:49:23.602076 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:49:24.102582 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:49:24.601942 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:49:25.102360 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:49:25.602350 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:49:26.102161 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:49:26.602794 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:49:27.102526 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:49:27.237160 1790192 kubeadm.go:1113] duration metric: took 4.281277151s to wait for elevateKubeSystemPrivileges
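The repeated `kubectl get sa default` invocations between 12:49:23 and 12:49:27 are minikube polling until the `default` service account exists in the new cluster, so that the `minikube-rbac` ClusterRoleBinding created just above has a subject to bind. Functionally this amounts to a retry loop along these lines (a sketch, not minikube's actual code):

	# poll roughly every 500ms until the default service account shows up
	until sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done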
	I0127 12:49:27.237200 1790192 kubeadm.go:394] duration metric: took 14.220369926s to StartCluster
	I0127 12:49:27.237228 1790192 settings.go:142] acquiring lock: {Name:mk5612abdbdf8001cdf3481ad7c8001a04d496dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:49:27.237320 1790192 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20318-1724227/kubeconfig
	I0127 12:49:27.238783 1790192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/kubeconfig: {Name:mk9a903cebca75da3c74ff68f708e0a4e05f999b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:49:27.239069 1790192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 12:49:27.239072 1790192 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.28 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 12:49:27.239175 1790192 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 12:49:27.239310 1790192 addons.go:69] Setting storage-provisioner=true in profile "bridge-956477"
	I0127 12:49:27.239320 1790192 config.go:182] Loaded profile config "bridge-956477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:49:27.239330 1790192 addons.go:238] Setting addon storage-provisioner=true in "bridge-956477"
	I0127 12:49:27.239333 1790192 addons.go:69] Setting default-storageclass=true in profile "bridge-956477"
	I0127 12:49:27.239365 1790192 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-956477"
	I0127 12:49:27.239371 1790192 host.go:66] Checking if "bridge-956477" exists ...
	I0127 12:49:27.239830 1790192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:49:27.239873 1790192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:49:27.239917 1790192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:49:27.239957 1790192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:49:27.240680 1790192 out.go:177] * Verifying Kubernetes components...
	I0127 12:49:27.241931 1790192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:49:27.261385 1790192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34619
	I0127 12:49:27.261452 1790192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41767
	I0127 12:49:27.261810 1790192 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:49:27.262003 1790192 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:49:27.262389 1790192 main.go:141] libmachine: Using API Version  1
	I0127 12:49:27.262417 1790192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:49:27.262543 1790192 main.go:141] libmachine: Using API Version  1
	I0127 12:49:27.262563 1790192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:49:27.262767 1790192 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:49:27.262952 1790192 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:49:27.262989 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetState
	I0127 12:49:27.263506 1790192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:49:27.263537 1790192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:49:27.266688 1790192 addons.go:238] Setting addon default-storageclass=true in "bridge-956477"
	I0127 12:49:27.266732 1790192 host.go:66] Checking if "bridge-956477" exists ...
	I0127 12:49:27.267120 1790192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:49:27.267168 1790192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:49:27.278963 1790192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35581
	I0127 12:49:27.279421 1790192 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:49:27.279976 1790192 main.go:141] libmachine: Using API Version  1
	I0127 12:49:27.279999 1790192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:49:27.280431 1790192 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:49:27.280692 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetState
	I0127 12:49:27.282702 1790192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36023
	I0127 12:49:27.282845 1790192 main.go:141] libmachine: (bridge-956477) Calling .DriverName
	I0127 12:49:27.283179 1790192 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:49:27.283627 1790192 main.go:141] libmachine: Using API Version  1
	I0127 12:49:27.283649 1790192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:49:27.283978 1790192 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:49:27.284748 1790192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:49:27.284785 1790192 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:49:27.284797 1790192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:49:27.285956 1790192 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:49:27.285977 1790192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 12:49:27.286001 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:27.288697 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:27.289087 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:27.289110 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:27.289304 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:27.289459 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:27.289574 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:27.289669 1790192 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/id_rsa Username:docker}
	I0127 12:49:27.301672 1790192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33341
	I0127 12:49:27.302317 1790192 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:49:27.302925 1790192 main.go:141] libmachine: Using API Version  1
	I0127 12:49:27.302949 1790192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:49:27.303263 1790192 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:49:27.303488 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetState
	I0127 12:49:27.305258 1790192 main.go:141] libmachine: (bridge-956477) Calling .DriverName
	I0127 12:49:27.305479 1790192 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 12:49:27.305497 1790192 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 12:49:27.305517 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:27.308750 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:27.309243 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:27.309269 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:27.309409 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:27.309585 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:27.309726 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:27.309875 1790192 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/id_rsa Username:docker}
	I0127 12:49:27.500640 1790192 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:49:27.500778 1790192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0127 12:49:27.538353 1790192 node_ready.go:35] waiting up to 15m0s for node "bridge-956477" to be "Ready" ...
	I0127 12:49:27.548400 1790192 node_ready.go:49] node "bridge-956477" has status "Ready":"True"
	I0127 12:49:27.548443 1790192 node_ready.go:38] duration metric: took 10.053639ms for node "bridge-956477" to be "Ready" ...
	I0127 12:49:27.548459 1790192 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:49:27.564271 1790192 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-c87bh" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:27.632137 1790192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:49:27.647091 1790192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 12:49:28.184542 1790192 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
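The sed pipeline run at 12:49:27.500778 rewrites the CoreDNS ConfigMap before replacing it: it adds a `log` directive above `errors` and, just above the `forward . /etc/resolv.conf` line, inserts a hosts block so that host.minikube.internal resolves to the host-side gateway:

	hosts {
	   192.168.72.1 host.minikube.internal
	   fallthrough
	}

which is the record the "host record injected into CoreDNS's ConfigMap" line above confirms.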
	I0127 12:49:28.549638 1790192 main.go:141] libmachine: Making call to close driver server
	I0127 12:49:28.549663 1790192 main.go:141] libmachine: (bridge-956477) Calling .Close
	I0127 12:49:28.550103 1790192 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:49:28.550127 1790192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:49:28.550137 1790192 main.go:141] libmachine: Making call to close driver server
	I0127 12:49:28.550144 1790192 main.go:141] libmachine: (bridge-956477) Calling .Close
	I0127 12:49:28.550198 1790192 main.go:141] libmachine: (bridge-956477) DBG | Closing plugin on server side
	I0127 12:49:28.550409 1790192 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:49:28.550429 1790192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:49:28.550443 1790192 main.go:141] libmachine: (bridge-956477) DBG | Closing plugin on server side
	I0127 12:49:28.550800 1790192 main.go:141] libmachine: Making call to close driver server
	I0127 12:49:28.550816 1790192 main.go:141] libmachine: (bridge-956477) Calling .Close
	I0127 12:49:28.551057 1790192 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:49:28.551076 1790192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:49:28.551081 1790192 main.go:141] libmachine: (bridge-956477) DBG | Closing plugin on server side
	I0127 12:49:28.551085 1790192 main.go:141] libmachine: Making call to close driver server
	I0127 12:49:28.551098 1790192 main.go:141] libmachine: (bridge-956477) Calling .Close
	I0127 12:49:28.551316 1790192 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:49:28.551331 1790192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:49:28.575614 1790192 main.go:141] libmachine: Making call to close driver server
	I0127 12:49:28.575665 1790192 main.go:141] libmachine: (bridge-956477) Calling .Close
	I0127 12:49:28.575924 1790192 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:49:28.575979 1790192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:49:28.575978 1790192 main.go:141] libmachine: (bridge-956477) DBG | Closing plugin on server side
	I0127 12:49:28.577474 1790192 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0127 12:49:28.578591 1790192 addons.go:514] duration metric: took 1.33943345s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0127 12:49:28.695806 1790192 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-956477" context rescaled to 1 replicas
	I0127 12:49:29.570116 1790192 pod_ready.go:103] pod "coredns-668d6bf9bc-c87bh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:49:31.570640 1790192 pod_ready.go:103] pod "coredns-668d6bf9bc-c87bh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:49:33.572383 1790192 pod_ready.go:103] pod "coredns-668d6bf9bc-c87bh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:49:34.570677 1790192 pod_ready.go:98] pod "coredns-668d6bf9bc-c87bh" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 12:49:34 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 12:49:27 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 12:49:27 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 12:49:27 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 12:49:27 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.72.28 HostIPs:[{IP:192.168.72.
28}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-01-27 12:49:27 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-01-27 12:49:28 +0000 UTC,FinishedAt:2025-01-27 12:49:34 +0000 UTC,ContainerID:cri-o://f63238121640928054da5b75a8267e8c3b0ecb455be8db829959a17b2f86f494,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://f63238121640928054da5b75a8267e8c3b0ecb455be8db829959a17b2f86f494 Started:0xc0023f14c0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0021ef1e0} {Name:kube-api-access-j5rfl MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0021ef1f0}] User:nil
AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0127 12:49:34.570712 1790192 pod_ready.go:82] duration metric: took 7.006412478s for pod "coredns-668d6bf9bc-c87bh" in "kube-system" namespace to be "Ready" ...
	E0127 12:49:34.570726 1790192 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-668d6bf9bc-c87bh" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 12:49:34 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 12:49:27 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 12:49:27 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 12:49:27 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 12:49:27 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.7
2.28 HostIPs:[{IP:192.168.72.28}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-01-27 12:49:27 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-01-27 12:49:28 +0000 UTC,FinishedAt:2025-01-27 12:49:34 +0000 UTC,ContainerID:cri-o://f63238121640928054da5b75a8267e8c3b0ecb455be8db829959a17b2f86f494,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://f63238121640928054da5b75a8267e8c3b0ecb455be8db829959a17b2f86f494 Started:0xc0023f14c0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0021ef1e0} {Name:kube-api-access-j5rfl MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveRead
Only:0xc0021ef1f0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0127 12:49:34.570736 1790192 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-q9r6j" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:34.575210 1790192 pod_ready.go:93] pod "coredns-668d6bf9bc-q9r6j" in "kube-system" namespace has status "Ready":"True"
	I0127 12:49:34.575232 1790192 pod_ready.go:82] duration metric: took 4.46563ms for pod "coredns-668d6bf9bc-q9r6j" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:34.575241 1790192 pod_ready.go:79] waiting up to 15m0s for pod "etcd-bridge-956477" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:36.082910 1790192 pod_ready.go:93] pod "etcd-bridge-956477" in "kube-system" namespace has status "Ready":"True"
	I0127 12:49:36.082952 1790192 pod_ready.go:82] duration metric: took 1.507702821s for pod "etcd-bridge-956477" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:36.082968 1790192 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-bridge-956477" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:36.086925 1790192 pod_ready.go:93] pod "kube-apiserver-bridge-956477" in "kube-system" namespace has status "Ready":"True"
	I0127 12:49:36.086953 1790192 pod_ready.go:82] duration metric: took 3.975819ms for pod "kube-apiserver-bridge-956477" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:36.086969 1790192 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-bridge-956477" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:36.091952 1790192 pod_ready.go:93] pod "kube-controller-manager-bridge-956477" in "kube-system" namespace has status "Ready":"True"
	I0127 12:49:36.091969 1790192 pod_ready.go:82] duration metric: took 4.993389ms for pod "kube-controller-manager-bridge-956477" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:36.091978 1790192 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-8fw2n" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:36.170654 1790192 pod_ready.go:93] pod "kube-proxy-8fw2n" in "kube-system" namespace has status "Ready":"True"
	I0127 12:49:36.170678 1790192 pod_ready.go:82] duration metric: took 78.694605ms for pod "kube-proxy-8fw2n" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:36.170688 1790192 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-bridge-956477" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:36.568993 1790192 pod_ready.go:93] pod "kube-scheduler-bridge-956477" in "kube-system" namespace has status "Ready":"True"
	I0127 12:49:36.569019 1790192 pod_ready.go:82] duration metric: took 398.324568ms for pod "kube-scheduler-bridge-956477" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:36.569029 1790192 pod_ready.go:39] duration metric: took 9.020555356s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
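The readiness gates above (node Ready almost immediately, all labelled system pods Ready after about 9s) can be reproduced by hand with kubectl against the same cluster; for example (a sketch, using the profile's kubeconfig context name):

	kubectl --context bridge-956477 wait --for=condition=Ready node/bridge-956477 --timeout=15m
	kubectl --context bridge-956477 -n kube-system wait --for=condition=Ready pod \
	  -l k8s-app=kube-dns --timeout=15m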
	I0127 12:49:36.569047 1790192 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:49:36.569110 1790192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:49:36.585221 1790192 api_server.go:72] duration metric: took 9.346111182s to wait for apiserver process to appear ...
	I0127 12:49:36.585260 1790192 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:49:36.585284 1790192 api_server.go:253] Checking apiserver healthz at https://192.168.72.28:8443/healthz ...
	I0127 12:49:36.592716 1790192 api_server.go:279] https://192.168.72.28:8443/healthz returned 200:
	ok
	I0127 12:49:36.594292 1790192 api_server.go:141] control plane version: v1.32.1
	I0127 12:49:36.594316 1790192 api_server.go:131] duration metric: took 9.04907ms to wait for apiserver health ...
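The healthz probe can likewise be repeated manually from anywhere that can reach 192.168.72.28 (for instance from inside `minikube -p bridge-956477 ssh`). The API server's serving certificate chains to the cluster CA under /var/lib/minikube/certs, so either pass that CA or skip verification:

	curl --cacert /var/lib/minikube/certs/ca.crt https://192.168.72.28:8443/healthz
	# or, without verification:
	curl -k https://192.168.72.28:8443/healthz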
	I0127 12:49:36.594325 1790192 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:49:36.771302 1790192 system_pods.go:59] 7 kube-system pods found
	I0127 12:49:36.771341 1790192 system_pods.go:61] "coredns-668d6bf9bc-q9r6j" [999c9062-2e0b-476e-8cf2-f462a0280779] Running
	I0127 12:49:36.771347 1790192 system_pods.go:61] "etcd-bridge-956477" [d82e5e0c-3cd1-48bb-9d1f-574dbca5e0cc] Running
	I0127 12:49:36.771353 1790192 system_pods.go:61] "kube-apiserver-bridge-956477" [8cbb1927-3e41-4894-b646-a02b07cfc4da] Running
	I0127 12:49:36.771358 1790192 system_pods.go:61] "kube-controller-manager-bridge-956477" [1214913d-b397-4e00-9d3f-927a4e471293] Running
	I0127 12:49:36.771363 1790192 system_pods.go:61] "kube-proxy-8fw2n" [00316310-fd3c-4bb3-91e1-0e309ea0cade] Running
	I0127 12:49:36.771368 1790192 system_pods.go:61] "kube-scheduler-bridge-956477" [5f90f0d7-62a7-49d0-b28a-cef4e5713bc4] Running
	I0127 12:49:36.771372 1790192 system_pods.go:61] "storage-provisioner" [417b172b-04aa-4f1a-8439-e4b76228f1ca] Running
	I0127 12:49:36.771382 1790192 system_pods.go:74] duration metric: took 177.049643ms to wait for pod list to return data ...
	I0127 12:49:36.771394 1790192 default_sa.go:34] waiting for default service account to be created ...
	I0127 12:49:36.969860 1790192 default_sa.go:45] found service account: "default"
	I0127 12:49:36.969891 1790192 default_sa.go:55] duration metric: took 198.486144ms for default service account to be created ...
	I0127 12:49:36.969903 1790192 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 12:49:37.173813 1790192 system_pods.go:87] 7 kube-system pods found
	I0127 12:49:37.370364 1790192 system_pods.go:105] "coredns-668d6bf9bc-q9r6j" [999c9062-2e0b-476e-8cf2-f462a0280779] Running
	I0127 12:49:37.370390 1790192 system_pods.go:105] "etcd-bridge-956477" [d82e5e0c-3cd1-48bb-9d1f-574dbca5e0cc] Running
	I0127 12:49:37.370396 1790192 system_pods.go:105] "kube-apiserver-bridge-956477" [8cbb1927-3e41-4894-b646-a02b07cfc4da] Running
	I0127 12:49:37.370401 1790192 system_pods.go:105] "kube-controller-manager-bridge-956477" [1214913d-b397-4e00-9d3f-927a4e471293] Running
	I0127 12:49:37.370407 1790192 system_pods.go:105] "kube-proxy-8fw2n" [00316310-fd3c-4bb3-91e1-0e309ea0cade] Running
	I0127 12:49:37.370411 1790192 system_pods.go:105] "kube-scheduler-bridge-956477" [5f90f0d7-62a7-49d0-b28a-cef4e5713bc4] Running
	I0127 12:49:37.370415 1790192 system_pods.go:105] "storage-provisioner" [417b172b-04aa-4f1a-8439-e4b76228f1ca] Running
	I0127 12:49:37.370423 1790192 system_pods.go:147] duration metric: took 400.513222ms to wait for k8s-apps to be running ...
	I0127 12:49:37.370430 1790192 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 12:49:37.370476 1790192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:49:37.386578 1790192 system_svc.go:56] duration metric: took 16.134406ms WaitForService to wait for kubelet
	I0127 12:49:37.386609 1790192 kubeadm.go:582] duration metric: took 10.147508217s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 12:49:37.386628 1790192 node_conditions.go:102] verifying NodePressure condition ...
	I0127 12:49:37.570387 1790192 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 12:49:37.570420 1790192 node_conditions.go:123] node cpu capacity is 2
	I0127 12:49:37.570439 1790192 node_conditions.go:105] duration metric: took 183.805809ms to run NodePressure ...
	I0127 12:49:37.570455 1790192 start.go:241] waiting for startup goroutines ...
	I0127 12:49:37.570466 1790192 start.go:246] waiting for cluster config update ...
	I0127 12:49:37.570478 1790192 start.go:255] writing updated cluster config ...
	I0127 12:49:37.570833 1790192 ssh_runner.go:195] Run: rm -f paused
	I0127 12:49:37.621383 1790192 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 12:49:37.623996 1790192 out.go:177] * Done! kubectl is now configured to use "bridge-956477" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 27 13:00:05 default-k8s-diff-port-485564 crio[727]: time="2025-01-27 13:00:05.231495917Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737982805231471757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=15e9c350-7ef2-4627-ba72-1531413e96cc name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:00:05 default-k8s-diff-port-485564 crio[727]: time="2025-01-27 13:00:05.231954086Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d30ed12a-a2dc-4b9e-ba03-c81a5e183a4d name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:00:05 default-k8s-diff-port-485564 crio[727]: time="2025-01-27 13:00:05.232002859Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d30ed12a-a2dc-4b9e-ba03-c81a5e183a4d name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:00:05 default-k8s-diff-port-485564 crio[727]: time="2025-01-27 13:00:05.232293327Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd30612437d7ea1252c9b7a36060b3c2a46ca25e7fa5e6b98b4065dfd2ff77cf,PodSandboxId:d221910ccdbdc176447eb0056caeb3055c79ab233f791aaa262a0047ee6540e7,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737982767399335295,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-jxz8s,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: e39f19cc-1743-41e1-9b00-b0de4c5d5748,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:073593dc64ed8c4f2e4e156fadb3c31f230a3fe7df8b0a5d2ec26688f611f453,PodSandboxId:1630f6d022340e1396336d344ca48e08578c33d62bf8b3d78ede287b4eafd10a,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737981505317375314,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-gtmkz,io.kubernetes.pod.namespace: kubernetes-dashboard
,io.kubernetes.pod.uid: 9e31ab1f-1c0c-452b-933f-967256168b85,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eae7bdcd82dc1daf0185bde5b83e714ec34737442d44431e4376ccddf25a4330,PodSandboxId:caec2fe7d55db7669aa8bc1b02375e4ccedece5e41c7ce2f0e02095eddf52049,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737981491545629921,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod
.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b23604bf-cd27-429a-8b5b-f5a6f6de713d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ba093a4071d8a1ab8f118b59a2aecbae9887b7ef9a2d784d616f7053597a718,PodSandboxId:53d2ded83601b40c4581dd782164486eee00a30520b84b6a0865a7d07ccac5e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737981491390447015,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-tn2kk,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58897b28-3c69-4c27-bbb5-c5f40f29fc79,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2b6451c603b9f3edbb7d999f9aeb109191302506e8cc57d639d012442d7a9c5,PodSandboxId:52225a404955c7ea2ceb98f48cc03531c38d02e8d3d81f6c8c19ce48639c11b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f
91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737981491331766995,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-sqbf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151241be-6a72-4400-b65e-8ce91d8b7778,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1234830a6fad276435ac909ed85a893786939862c38b618f68158573636482d,PodSandboxId:f9e867e77da7418197acf0476b0a86b3d55370d7e8b2244ec3eb052ee695a65b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Im
ageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737981490407843422,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sms7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eac7f36-acf3-4d10-b37a-a8fb1d46b787,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c21f596c10ac31d7c0256b2dbfd0dba7a3e792b6d4e299f883ca5196c8ce56,PodSandboxId:f6d6255a1c3fa42cd360a090144e154a9b2d45c829e88570b3e6ad110ac1f8d9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee1
82b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737981479706152147,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-485564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0020e19914873dd683a4e748f4887eec,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85452a24439b4b5d2c041ac84e66cbc2b001577785deeca8217d7c4ea0c57f8b,PodSandboxId:bf94bc53a7906eae3248e9b3b2db1e38b51f22b137bbe901ac69fad537809976,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSp
ec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737981479699749768,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-485564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcdf8b23818828a0bc428856036f48db,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c6fa36d08719c8a8598419b48e2cd83da68ad7d302a6718af8e3e307a9f45b3,PodSandboxId:27c14bf7596dfc9a88c459447eb1ad301294f2e3666067b1b35ef2006354bdad,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e
7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737981479733238941,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-485564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebaf93444f262ea2bed52b8eba552037,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c54806d35c5f22b1c741d1dd6043e05c5239232b94bca0c86396df38d012d7b9,PodSandboxId:c4280c0ae131c5410c473c298431e588f505e883dc6bcebb0b840b014e2f80b7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15
c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737981479671575387,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-485564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed8668e258154534df5a89421798a633,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad21ec5ff154ba410ffb2394af4c71ce440577fd90efab449350801104cdbaff,PodSandboxId:a2fc9b3ed040132add530fa4a820303b64919c0486c53ff584a49a17a68f4360,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c0
4427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737981194814101574,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-485564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed8668e258154534df5a89421798a633,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d30ed12a-a2dc-4b9e-ba03-c81a5e183a4d name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:00:05 default-k8s-diff-port-485564 crio[727]: time="2025-01-27 13:00:05.267221984Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e75a628f-458f-463c-9df6-d4e6329808c2 name=/runtime.v1.RuntimeService/Version
	Jan 27 13:00:05 default-k8s-diff-port-485564 crio[727]: time="2025-01-27 13:00:05.267294444Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e75a628f-458f-463c-9df6-d4e6329808c2 name=/runtime.v1.RuntimeService/Version
	Jan 27 13:00:05 default-k8s-diff-port-485564 crio[727]: time="2025-01-27 13:00:05.268313844Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e33eb432-e920-4e14-bba9-76ab50d78475 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:00:05 default-k8s-diff-port-485564 crio[727]: time="2025-01-27 13:00:05.268805442Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737982805268782607,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e33eb432-e920-4e14-bba9-76ab50d78475 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:00:05 default-k8s-diff-port-485564 crio[727]: time="2025-01-27 13:00:05.269398489Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d2a418a-79a3-4658-b6cc-79c4342cfc22 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:00:05 default-k8s-diff-port-485564 crio[727]: time="2025-01-27 13:00:05.269462072Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d2a418a-79a3-4658-b6cc-79c4342cfc22 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:00:05 default-k8s-diff-port-485564 crio[727]: time="2025-01-27 13:00:05.269724383Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd30612437d7ea1252c9b7a36060b3c2a46ca25e7fa5e6b98b4065dfd2ff77cf,PodSandboxId:d221910ccdbdc176447eb0056caeb3055c79ab233f791aaa262a0047ee6540e7,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737982767399335295,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-jxz8s,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: e39f19cc-1743-41e1-9b00-b0de4c5d5748,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:073593dc64ed8c4f2e4e156fadb3c31f230a3fe7df8b0a5d2ec26688f611f453,PodSandboxId:1630f6d022340e1396336d344ca48e08578c33d62bf8b3d78ede287b4eafd10a,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737981505317375314,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-gtmkz,io.kubernetes.pod.namespace: kubernetes-dashboard
,io.kubernetes.pod.uid: 9e31ab1f-1c0c-452b-933f-967256168b85,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eae7bdcd82dc1daf0185bde5b83e714ec34737442d44431e4376ccddf25a4330,PodSandboxId:caec2fe7d55db7669aa8bc1b02375e4ccedece5e41c7ce2f0e02095eddf52049,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737981491545629921,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod
.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b23604bf-cd27-429a-8b5b-f5a6f6de713d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ba093a4071d8a1ab8f118b59a2aecbae9887b7ef9a2d784d616f7053597a718,PodSandboxId:53d2ded83601b40c4581dd782164486eee00a30520b84b6a0865a7d07ccac5e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737981491390447015,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-tn2kk,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58897b28-3c69-4c27-bbb5-c5f40f29fc79,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2b6451c603b9f3edbb7d999f9aeb109191302506e8cc57d639d012442d7a9c5,PodSandboxId:52225a404955c7ea2ceb98f48cc03531c38d02e8d3d81f6c8c19ce48639c11b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f
91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737981491331766995,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-sqbf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151241be-6a72-4400-b65e-8ce91d8b7778,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1234830a6fad276435ac909ed85a893786939862c38b618f68158573636482d,PodSandboxId:f9e867e77da7418197acf0476b0a86b3d55370d7e8b2244ec3eb052ee695a65b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Im
ageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737981490407843422,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sms7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eac7f36-acf3-4d10-b37a-a8fb1d46b787,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c21f596c10ac31d7c0256b2dbfd0dba7a3e792b6d4e299f883ca5196c8ce56,PodSandboxId:f6d6255a1c3fa42cd360a090144e154a9b2d45c829e88570b3e6ad110ac1f8d9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee1
82b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737981479706152147,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-485564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0020e19914873dd683a4e748f4887eec,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85452a24439b4b5d2c041ac84e66cbc2b001577785deeca8217d7c4ea0c57f8b,PodSandboxId:bf94bc53a7906eae3248e9b3b2db1e38b51f22b137bbe901ac69fad537809976,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSp
ec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737981479699749768,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-485564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcdf8b23818828a0bc428856036f48db,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c6fa36d08719c8a8598419b48e2cd83da68ad7d302a6718af8e3e307a9f45b3,PodSandboxId:27c14bf7596dfc9a88c459447eb1ad301294f2e3666067b1b35ef2006354bdad,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e
7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737981479733238941,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-485564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebaf93444f262ea2bed52b8eba552037,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c54806d35c5f22b1c741d1dd6043e05c5239232b94bca0c86396df38d012d7b9,PodSandboxId:c4280c0ae131c5410c473c298431e588f505e883dc6bcebb0b840b014e2f80b7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15
c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737981479671575387,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-485564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed8668e258154534df5a89421798a633,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad21ec5ff154ba410ffb2394af4c71ce440577fd90efab449350801104cdbaff,PodSandboxId:a2fc9b3ed040132add530fa4a820303b64919c0486c53ff584a49a17a68f4360,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c0
4427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737981194814101574,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-485564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed8668e258154534df5a89421798a633,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5d2a418a-79a3-4658-b6cc-79c4342cfc22 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:00:05 default-k8s-diff-port-485564 crio[727]: time="2025-01-27 13:00:05.299000383Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e4f7caba-f365-435c-8560-2e0c7640c1ae name=/runtime.v1.RuntimeService/Version
	Jan 27 13:00:05 default-k8s-diff-port-485564 crio[727]: time="2025-01-27 13:00:05.299076158Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e4f7caba-f365-435c-8560-2e0c7640c1ae name=/runtime.v1.RuntimeService/Version
	Jan 27 13:00:05 default-k8s-diff-port-485564 crio[727]: time="2025-01-27 13:00:05.299942049Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=566f3c8a-97d2-48b1-9d0f-00b3af07d467 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:00:05 default-k8s-diff-port-485564 crio[727]: time="2025-01-27 13:00:05.300370733Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737982805300352594,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=566f3c8a-97d2-48b1-9d0f-00b3af07d467 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:00:05 default-k8s-diff-port-485564 crio[727]: time="2025-01-27 13:00:05.300815609Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fbdc5b83-22b8-476d-9fc1-95648b64b053 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:00:05 default-k8s-diff-port-485564 crio[727]: time="2025-01-27 13:00:05.300877936Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fbdc5b83-22b8-476d-9fc1-95648b64b053 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:00:05 default-k8s-diff-port-485564 crio[727]: time="2025-01-27 13:00:05.301106129Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd30612437d7ea1252c9b7a36060b3c2a46ca25e7fa5e6b98b4065dfd2ff77cf,PodSandboxId:d221910ccdbdc176447eb0056caeb3055c79ab233f791aaa262a0047ee6540e7,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737982767399335295,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-jxz8s,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: e39f19cc-1743-41e1-9b00-b0de4c5d5748,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:073593dc64ed8c4f2e4e156fadb3c31f230a3fe7df8b0a5d2ec26688f611f453,PodSandboxId:1630f6d022340e1396336d344ca48e08578c33d62bf8b3d78ede287b4eafd10a,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737981505317375314,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-gtmkz,io.kubernetes.pod.namespace: kubernetes-dashboard
,io.kubernetes.pod.uid: 9e31ab1f-1c0c-452b-933f-967256168b85,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eae7bdcd82dc1daf0185bde5b83e714ec34737442d44431e4376ccddf25a4330,PodSandboxId:caec2fe7d55db7669aa8bc1b02375e4ccedece5e41c7ce2f0e02095eddf52049,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737981491545629921,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod
.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b23604bf-cd27-429a-8b5b-f5a6f6de713d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ba093a4071d8a1ab8f118b59a2aecbae9887b7ef9a2d784d616f7053597a718,PodSandboxId:53d2ded83601b40c4581dd782164486eee00a30520b84b6a0865a7d07ccac5e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737981491390447015,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-tn2kk,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58897b28-3c69-4c27-bbb5-c5f40f29fc79,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2b6451c603b9f3edbb7d999f9aeb109191302506e8cc57d639d012442d7a9c5,PodSandboxId:52225a404955c7ea2ceb98f48cc03531c38d02e8d3d81f6c8c19ce48639c11b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f
91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737981491331766995,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-sqbf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151241be-6a72-4400-b65e-8ce91d8b7778,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1234830a6fad276435ac909ed85a893786939862c38b618f68158573636482d,PodSandboxId:f9e867e77da7418197acf0476b0a86b3d55370d7e8b2244ec3eb052ee695a65b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Im
ageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737981490407843422,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sms7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eac7f36-acf3-4d10-b37a-a8fb1d46b787,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c21f596c10ac31d7c0256b2dbfd0dba7a3e792b6d4e299f883ca5196c8ce56,PodSandboxId:f6d6255a1c3fa42cd360a090144e154a9b2d45c829e88570b3e6ad110ac1f8d9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee1
82b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737981479706152147,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-485564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0020e19914873dd683a4e748f4887eec,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85452a24439b4b5d2c041ac84e66cbc2b001577785deeca8217d7c4ea0c57f8b,PodSandboxId:bf94bc53a7906eae3248e9b3b2db1e38b51f22b137bbe901ac69fad537809976,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSp
ec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737981479699749768,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-485564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcdf8b23818828a0bc428856036f48db,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c6fa36d08719c8a8598419b48e2cd83da68ad7d302a6718af8e3e307a9f45b3,PodSandboxId:27c14bf7596dfc9a88c459447eb1ad301294f2e3666067b1b35ef2006354bdad,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e
7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737981479733238941,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-485564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebaf93444f262ea2bed52b8eba552037,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c54806d35c5f22b1c741d1dd6043e05c5239232b94bca0c86396df38d012d7b9,PodSandboxId:c4280c0ae131c5410c473c298431e588f505e883dc6bcebb0b840b014e2f80b7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15
c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737981479671575387,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-485564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed8668e258154534df5a89421798a633,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad21ec5ff154ba410ffb2394af4c71ce440577fd90efab449350801104cdbaff,PodSandboxId:a2fc9b3ed040132add530fa4a820303b64919c0486c53ff584a49a17a68f4360,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c0
4427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737981194814101574,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-485564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed8668e258154534df5a89421798a633,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fbdc5b83-22b8-476d-9fc1-95648b64b053 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:00:05 default-k8s-diff-port-485564 crio[727]: time="2025-01-27 13:00:05.332680279Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8fe55043-3297-4dd3-b11e-1f23fca531c8 name=/runtime.v1.RuntimeService/Version
	Jan 27 13:00:05 default-k8s-diff-port-485564 crio[727]: time="2025-01-27 13:00:05.332746750Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8fe55043-3297-4dd3-b11e-1f23fca531c8 name=/runtime.v1.RuntimeService/Version
	Jan 27 13:00:05 default-k8s-diff-port-485564 crio[727]: time="2025-01-27 13:00:05.333930083Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=812fb0b5-4403-4807-80b8-ac5dde16c03f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:00:05 default-k8s-diff-port-485564 crio[727]: time="2025-01-27 13:00:05.334579433Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737982805334515961,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=812fb0b5-4403-4807-80b8-ac5dde16c03f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:00:05 default-k8s-diff-port-485564 crio[727]: time="2025-01-27 13:00:05.335094211Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=20d0365b-deec-4656-9d4f-b166fbd532c5 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:00:05 default-k8s-diff-port-485564 crio[727]: time="2025-01-27 13:00:05.335146028Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=20d0365b-deec-4656-9d4f-b166fbd532c5 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:00:05 default-k8s-diff-port-485564 crio[727]: time="2025-01-27 13:00:05.335368439Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd30612437d7ea1252c9b7a36060b3c2a46ca25e7fa5e6b98b4065dfd2ff77cf,PodSandboxId:d221910ccdbdc176447eb0056caeb3055c79ab233f791aaa262a0047ee6540e7,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737982767399335295,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-jxz8s,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: e39f19cc-1743-41e1-9b00-b0de4c5d5748,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:073593dc64ed8c4f2e4e156fadb3c31f230a3fe7df8b0a5d2ec26688f611f453,PodSandboxId:1630f6d022340e1396336d344ca48e08578c33d62bf8b3d78ede287b4eafd10a,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737981505317375314,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-gtmkz,io.kubernetes.pod.namespace: kubernetes-dashboard
,io.kubernetes.pod.uid: 9e31ab1f-1c0c-452b-933f-967256168b85,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eae7bdcd82dc1daf0185bde5b83e714ec34737442d44431e4376ccddf25a4330,PodSandboxId:caec2fe7d55db7669aa8bc1b02375e4ccedece5e41c7ce2f0e02095eddf52049,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737981491545629921,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod
.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b23604bf-cd27-429a-8b5b-f5a6f6de713d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ba093a4071d8a1ab8f118b59a2aecbae9887b7ef9a2d784d616f7053597a718,PodSandboxId:53d2ded83601b40c4581dd782164486eee00a30520b84b6a0865a7d07ccac5e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737981491390447015,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-tn2kk,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58897b28-3c69-4c27-bbb5-c5f40f29fc79,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2b6451c603b9f3edbb7d999f9aeb109191302506e8cc57d639d012442d7a9c5,PodSandboxId:52225a404955c7ea2ceb98f48cc03531c38d02e8d3d81f6c8c19ce48639c11b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f
91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737981491331766995,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-sqbf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151241be-6a72-4400-b65e-8ce91d8b7778,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1234830a6fad276435ac909ed85a893786939862c38b618f68158573636482d,PodSandboxId:f9e867e77da7418197acf0476b0a86b3d55370d7e8b2244ec3eb052ee695a65b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Im
ageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737981490407843422,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sms7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eac7f36-acf3-4d10-b37a-a8fb1d46b787,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c21f596c10ac31d7c0256b2dbfd0dba7a3e792b6d4e299f883ca5196c8ce56,PodSandboxId:f6d6255a1c3fa42cd360a090144e154a9b2d45c829e88570b3e6ad110ac1f8d9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee1
82b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737981479706152147,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-485564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0020e19914873dd683a4e748f4887eec,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85452a24439b4b5d2c041ac84e66cbc2b001577785deeca8217d7c4ea0c57f8b,PodSandboxId:bf94bc53a7906eae3248e9b3b2db1e38b51f22b137bbe901ac69fad537809976,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSp
ec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737981479699749768,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-485564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcdf8b23818828a0bc428856036f48db,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c6fa36d08719c8a8598419b48e2cd83da68ad7d302a6718af8e3e307a9f45b3,PodSandboxId:27c14bf7596dfc9a88c459447eb1ad301294f2e3666067b1b35ef2006354bdad,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e
7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737981479733238941,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-485564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebaf93444f262ea2bed52b8eba552037,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c54806d35c5f22b1c741d1dd6043e05c5239232b94bca0c86396df38d012d7b9,PodSandboxId:c4280c0ae131c5410c473c298431e588f505e883dc6bcebb0b840b014e2f80b7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15
c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737981479671575387,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-485564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed8668e258154534df5a89421798a633,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad21ec5ff154ba410ffb2394af4c71ce440577fd90efab449350801104cdbaff,PodSandboxId:a2fc9b3ed040132add530fa4a820303b64919c0486c53ff584a49a17a68f4360,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c0
4427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737981194814101574,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-485564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed8668e258154534df5a89421798a633,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=20d0365b-deec-4656-9d4f-b166fbd532c5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	bd30612437d7e       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           38 seconds ago      Exited              dashboard-metrics-scraper   9                   d221910ccdbdc       dashboard-metrics-scraper-86c6bf9756-jxz8s
	073593dc64ed8       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   21 minutes ago      Running             kubernetes-dashboard        0                   1630f6d022340       kubernetes-dashboard-7779f9b69b-gtmkz
	eae7bdcd82dc1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 minutes ago      Running             storage-provisioner         0                   caec2fe7d55db       storage-provisioner
	7ba093a4071d8       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           21 minutes ago      Running             coredns                     0                   53d2ded83601b       coredns-668d6bf9bc-tn2kk
	c2b6451c603b9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           21 minutes ago      Running             coredns                     0                   52225a404955c       coredns-668d6bf9bc-sqbf8
	a1234830a6fad       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                           21 minutes ago      Running             kube-proxy                  0                   f9e867e77da74       kube-proxy-sms7c
	1c6fa36d08719       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                           22 minutes ago      Running             etcd                        2                   27c14bf7596df       etcd-default-k8s-diff-port-485564
	a2c21f596c10a       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                           22 minutes ago      Running             kube-controller-manager     2                   f6d6255a1c3fa       kube-controller-manager-default-k8s-diff-port-485564
	85452a24439b4       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                           22 minutes ago      Running             kube-scheduler              2                   bf94bc53a7906       kube-scheduler-default-k8s-diff-port-485564
	c54806d35c5f2       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                           22 minutes ago      Running             kube-apiserver              2                   c4280c0ae131c       kube-apiserver-default-k8s-diff-port-485564
	ad21ec5ff154b       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                           26 minutes ago      Exited              kube-apiserver              1                   a2fc9b3ed0401       kube-apiserver-default-k8s-diff-port-485564
	
	
	==> coredns [7ba093a4071d8a1ab8f118b59a2aecbae9887b7ef9a2d784d616f7053597a718] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [c2b6451c603b9f3edbb7d999f9aeb109191302506e8cc57d639d012442d7a9c5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-485564
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-485564
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650
	                    minikube.k8s.io/name=default-k8s-diff-port-485564
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T12_38_06_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 12:38:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-485564
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 13:00:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 12:58:49 +0000   Mon, 27 Jan 2025 12:38:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 12:58:49 +0000   Mon, 27 Jan 2025 12:38:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 12:58:49 +0000   Mon, 27 Jan 2025 12:38:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 12:58:49 +0000   Mon, 27 Jan 2025 12:38:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.190
	  Hostname:    default-k8s-diff-port-485564
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3edb205d32ab428b8ff9a2544e65857d
	  System UUID:                3edb205d-32ab-428b-8ff9-a2544e65857d
	  Boot ID:                    8500c89f-21d0-4ed9-a3ff-2186454175b8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-sqbf8                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 coredns-668d6bf9bc-tn2kk                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-485564                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-default-k8s-diff-port-485564             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-485564    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-sms7c                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-485564             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-f79f97bbb-x9qcz                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-jxz8s              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-gtmkz                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-485564 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-485564 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node default-k8s-diff-port-485564 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node default-k8s-diff-port-485564 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node default-k8s-diff-port-485564 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m                kubelet          Node default-k8s-diff-port-485564 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-485564 event: Registered Node default-k8s-diff-port-485564 in Controller
	
	
	==> dmesg <==
	[  +2.013271] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.545100] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jan27 12:33] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.058556] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059204] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.169983] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +0.139522] systemd-fstab-generator[687]: Ignoring "noauto" option for root device
	[  +0.283856] systemd-fstab-generator[716]: Ignoring "noauto" option for root device
	[  +3.872083] systemd-fstab-generator[810]: Ignoring "noauto" option for root device
	[  +2.123023] systemd-fstab-generator[933]: Ignoring "noauto" option for root device
	[  +0.059302] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.663717] kauditd_printk_skb: 69 callbacks suppressed
	[  +7.591778] kauditd_printk_skb: 54 callbacks suppressed
	[ +23.319848] kauditd_printk_skb: 31 callbacks suppressed
	[Jan27 12:37] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.923634] systemd-fstab-generator[2637]: Ignoring "noauto" option for root device
	[Jan27 12:38] kauditd_printk_skb: 56 callbacks suppressed
	[  +2.012768] systemd-fstab-generator[2978]: Ignoring "noauto" option for root device
	[  +4.339194] systemd-fstab-generator[3082]: Ignoring "noauto" option for root device
	[  +0.113417] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.028166] kauditd_printk_skb: 105 callbacks suppressed
	[  +5.014112] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.763236] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [1c6fa36d08719c8a8598419b48e2cd83da68ad7d302a6718af8e3e307a9f45b3] <==
	{"level":"warn","ts":"2025-01-27T12:47:34.350488Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.887841ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1276933942641720610 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.61.190\" mod_revision:1112 > success:<request_put:<key:\"/registry/masterleases/192.168.61.190\" value_size:67 lease:1276933942641720607 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.190\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-01-27T12:47:34.351097Z","caller":"traceutil/trace.go:171","msg":"trace[2069723024] transaction","detail":"{read_only:false; response_revision:1120; number_of_response:1; }","duration":"275.267767ms","start":"2025-01-27T12:47:34.075797Z","end":"2025-01-27T12:47:34.351065Z","steps":["trace[2069723024] 'process raft request'  (duration: 126.547521ms)","trace[2069723024] 'compare'  (duration: 147.684175ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T12:47:34.741756Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.37597ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T12:47:34.742597Z","caller":"traceutil/trace.go:171","msg":"trace[626370676] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1120; }","duration":"105.256803ms","start":"2025-01-27T12:47:34.637334Z","end":"2025-01-27T12:47:34.742591Z","steps":["trace[626370676] 'range keys from in-memory index tree'  (duration: 104.361105ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T12:47:34.742602Z","caller":"traceutil/trace.go:171","msg":"trace[1445570446] transaction","detail":"{read_only:false; response_revision:1121; number_of_response:1; }","duration":"291.687243ms","start":"2025-01-27T12:47:34.450897Z","end":"2025-01-27T12:47:34.742584Z","steps":["trace[1445570446] 'process raft request'  (duration: 291.336625ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T12:47:34.742564Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"277.535399ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T12:47:34.743100Z","caller":"traceutil/trace.go:171","msg":"trace[421445185] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1121; }","duration":"278.129088ms","start":"2025-01-27T12:47:34.464961Z","end":"2025-01-27T12:47:34.743090Z","steps":["trace[421445185] 'agreement among raft nodes before linearized reading'  (duration: 277.537221ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T12:47:34.742358Z","caller":"traceutil/trace.go:171","msg":"trace[592892193] linearizableReadLoop","detail":"{readStateIndex:1251; appliedIndex:1250; }","duration":"277.345064ms","start":"2025-01-27T12:47:34.465001Z","end":"2025-01-27T12:47:34.742346Z","steps":["trace[592892193] 'read index received'  (duration: 276.908874ms)","trace[592892193] 'applied index is now lower than readState.Index'  (duration: 435.609µs)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T12:48:00.807978Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":892}
	{"level":"info","ts":"2025-01-27T12:48:00.836258Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":892,"took":"27.7905ms","hash":3783900139,"current-db-size-bytes":2908160,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":2908160,"current-db-size-in-use":"2.9 MB"}
	{"level":"info","ts":"2025-01-27T12:48:00.836348Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3783900139,"revision":892,"compact-revision":-1}
	{"level":"info","ts":"2025-01-27T12:49:13.387430Z","caller":"traceutil/trace.go:171","msg":"trace[763835757] transaction","detail":"{read_only:false; response_revision:1210; number_of_response:1; }","duration":"159.173065ms","start":"2025-01-27T12:49:13.228236Z","end":"2025-01-27T12:49:13.387409Z","steps":["trace[763835757] 'process raft request'  (duration: 159.074734ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T12:49:13.387843Z","caller":"traceutil/trace.go:171","msg":"trace[606824253] linearizableReadLoop","detail":"{readStateIndex:1361; appliedIndex:1361; }","duration":"126.49383ms","start":"2025-01-27T12:49:13.261329Z","end":"2025-01-27T12:49:13.387823Z","steps":["trace[606824253] 'read index received'  (duration: 125.897513ms)","trace[606824253] 'applied index is now lower than readState.Index'  (duration: 594.319µs)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T12:49:13.388057Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.678539ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T12:49:13.388105Z","caller":"traceutil/trace.go:171","msg":"trace[334584187] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1210; }","duration":"126.801867ms","start":"2025-01-27T12:49:13.261292Z","end":"2025-01-27T12:49:13.388094Z","steps":["trace[334584187] 'agreement among raft nodes before linearized reading'  (duration: 126.653291ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T12:49:14.351426Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.625684ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1276933942641721618 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.61.190\" mod_revision:1198 > success:<request_put:<key:\"/registry/masterleases/192.168.61.190\" value_size:67 lease:1276933942641721615 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.190\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-01-27T12:49:14.351873Z","caller":"traceutil/trace.go:171","msg":"trace[817237334] transaction","detail":"{read_only:false; response_revision:1211; number_of_response:1; }","duration":"275.970436ms","start":"2025-01-27T12:49:14.075881Z","end":"2025-01-27T12:49:14.351852Z","steps":["trace[817237334] 'process raft request'  (duration: 127.829476ms)","trace[817237334] 'compare'  (duration: 147.467163ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T12:49:14.591750Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.512273ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T12:49:14.591840Z","caller":"traceutil/trace.go:171","msg":"trace[468374089] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1211; }","duration":"130.665361ms","start":"2025-01-27T12:49:14.461160Z","end":"2025-01-27T12:49:14.591825Z","steps":["trace[468374089] 'range keys from in-memory index tree'  (duration: 130.468021ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T12:53:00.817718Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1142}
	{"level":"info","ts":"2025-01-27T12:53:00.823154Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1142,"took":"4.98473ms","hash":3130266708,"current-db-size-bytes":2908160,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1769472,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-27T12:53:00.823208Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3130266708,"revision":1142,"compact-revision":892}
	{"level":"info","ts":"2025-01-27T12:58:00.825860Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1393}
	{"level":"info","ts":"2025-01-27T12:58:00.830434Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1393,"took":"3.645304ms","hash":1437366553,"current-db-size-bytes":2908160,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1798144,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-27T12:58:00.830560Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":1437366553,"revision":1393,"compact-revision":1142}
	
	
	==> kernel <==
	 13:00:05 up 27 min,  0 users,  load average: 0.26, 0.34, 0.28
	Linux default-k8s-diff-port-485564 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ad21ec5ff154ba410ffb2394af4c71ce440577fd90efab449350801104cdbaff] <==
	W0127 12:37:54.683153       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:54.706819       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:54.743293       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:54.746703       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:54.750986       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:54.840792       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:54.919012       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:54.925665       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:54.942865       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:54.951412       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:54.952765       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:54.992507       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:54.997992       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:55.026731       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:55.040114       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:55.042409       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:55.043738       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:55.094113       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:55.155902       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:55.159369       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:55.183852       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:55.201305       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:55.231237       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:55.438095       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 12:37:55.567111       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [c54806d35c5f22b1c741d1dd6043e05c5239232b94bca0c86396df38d012d7b9] <==
	I0127 12:56:03.242904       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 12:56:03.242944       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 12:58:02.239090       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:58:02.239267       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0127 12:58:03.241451       1 handler_proxy.go:99] no RequestInfo found in the context
	W0127 12:58:03.241467       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:58:03.241623       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0127 12:58:03.241701       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 12:58:03.242984       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 12:58:03.243054       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 12:59:03.243441       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:59:03.243502       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0127 12:59:03.243618       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:59:03.243725       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 12:59:03.244626       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 12:59:03.245708       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [a2c21f596c10ac31d7c0256b2dbfd0dba7a3e792b6d4e299f883ca5196c8ce56] <==
	E0127 12:55:09.027390       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:55:09.066617       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:55:39.033792       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:55:39.073800       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:56:09.040467       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:56:09.081197       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:56:39.046427       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:56:39.087481       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:57:09.052055       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:57:09.093502       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:57:39.058497       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:57:39.100843       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:58:09.065156       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:58:09.108249       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:58:39.071447       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:58:39.114384       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 12:58:49.861187       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-485564"
	E0127 12:59:09.077702       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:59:09.122058       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 12:59:13.397720       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="127.319µs"
	I0127 12:59:27.631871       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="67.882µs"
	I0127 12:59:28.394916       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="108.36µs"
	I0127 12:59:32.149424       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="66.738µs"
	E0127 12:59:39.084710       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:59:39.129921       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [a1234830a6fad276435ac909ed85a893786939862c38b618f68158573636482d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 12:38:10.797700       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 12:38:10.810733       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.61.190"]
	E0127 12:38:10.810798       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 12:38:10.932747       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 12:38:10.932854       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 12:38:10.932896       1 server_linux.go:170] "Using iptables Proxier"
	I0127 12:38:10.952474       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 12:38:10.961882       1 server.go:497] "Version info" version="v1.32.1"
	I0127 12:38:10.961907       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:38:10.977772       1 config.go:199] "Starting service config controller"
	I0127 12:38:10.981374       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 12:38:10.981642       1 config.go:105] "Starting endpoint slice config controller"
	I0127 12:38:10.981720       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 12:38:10.983706       1 config.go:329] "Starting node config controller"
	I0127 12:38:10.984700       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 12:38:11.082657       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 12:38:11.083623       1 shared_informer.go:320] Caches are synced for service config
	I0127 12:38:11.085653       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [85452a24439b4b5d2c041ac84e66cbc2b001577785deeca8217d7c4ea0c57f8b] <==
	W0127 12:38:03.061049       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0127 12:38:03.061105       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 12:38:03.098614       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0127 12:38:03.098677       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 12:38:03.099031       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0127 12:38:03.099087       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:38:03.277122       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0127 12:38:03.277210       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 12:38:03.303945       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 12:38:03.303995       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:38:03.313658       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0127 12:38:03.313701       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:38:03.321983       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0127 12:38:03.322134       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:38:03.479476       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 12:38:03.480134       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 12:38:03.480426       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 12:38:03.480791       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:38:03.481061       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0127 12:38:03.481153       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0127 12:38:03.485733       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0127 12:38:03.485828       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 12:38:03.704823       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 12:38:03.704969       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0127 12:38:05.346161       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 12:59:27 default-k8s-diff-port-485564 kubelet[2985]: I0127 12:59:27.614105    2985 scope.go:117] "RemoveContainer" containerID="a14a80b3207041ac48673bd4a650cc9e63aedf7e547fae24dc7078bfeee4ab08"
	Jan 27 12:59:27 default-k8s-diff-port-485564 kubelet[2985]: I0127 12:59:27.614488    2985 scope.go:117] "RemoveContainer" containerID="bd30612437d7ea1252c9b7a36060b3c2a46ca25e7fa5e6b98b4065dfd2ff77cf"
	Jan 27 12:59:27 default-k8s-diff-port-485564 kubelet[2985]: E0127 12:59:27.614704    2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-jxz8s_kubernetes-dashboard(e39f19cc-1743-41e1-9b00-b0de4c5d5748)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-jxz8s" podUID="e39f19cc-1743-41e1-9b00-b0de4c5d5748"
	Jan 27 12:59:28 default-k8s-diff-port-485564 kubelet[2985]: E0127 12:59:28.381745    2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-x9qcz" podUID="a29b3256-0775-4c65-b7fb-706574cf8487"
	Jan 27 12:59:32 default-k8s-diff-port-485564 kubelet[2985]: I0127 12:59:32.130272    2985 scope.go:117] "RemoveContainer" containerID="bd30612437d7ea1252c9b7a36060b3c2a46ca25e7fa5e6b98b4065dfd2ff77cf"
	Jan 27 12:59:32 default-k8s-diff-port-485564 kubelet[2985]: E0127 12:59:32.131060    2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-jxz8s_kubernetes-dashboard(e39f19cc-1743-41e1-9b00-b0de4c5d5748)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-jxz8s" podUID="e39f19cc-1743-41e1-9b00-b0de4c5d5748"
	Jan 27 12:59:35 default-k8s-diff-port-485564 kubelet[2985]: E0127 12:59:35.806778    2985 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737982775806241482,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:59:35 default-k8s-diff-port-485564 kubelet[2985]: E0127 12:59:35.807117    2985 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737982775806241482,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:59:41 default-k8s-diff-port-485564 kubelet[2985]: E0127 12:59:41.381924    2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-x9qcz" podUID="a29b3256-0775-4c65-b7fb-706574cf8487"
	Jan 27 12:59:43 default-k8s-diff-port-485564 kubelet[2985]: I0127 12:59:43.380944    2985 scope.go:117] "RemoveContainer" containerID="bd30612437d7ea1252c9b7a36060b3c2a46ca25e7fa5e6b98b4065dfd2ff77cf"
	Jan 27 12:59:43 default-k8s-diff-port-485564 kubelet[2985]: E0127 12:59:43.381125    2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-jxz8s_kubernetes-dashboard(e39f19cc-1743-41e1-9b00-b0de4c5d5748)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-jxz8s" podUID="e39f19cc-1743-41e1-9b00-b0de4c5d5748"
	Jan 27 12:59:45 default-k8s-diff-port-485564 kubelet[2985]: E0127 12:59:45.809660    2985 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737982785809075238,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:59:45 default-k8s-diff-port-485564 kubelet[2985]: E0127 12:59:45.809986    2985 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737982785809075238,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:59:54 default-k8s-diff-port-485564 kubelet[2985]: E0127 12:59:54.382021    2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-x9qcz" podUID="a29b3256-0775-4c65-b7fb-706574cf8487"
	Jan 27 12:59:55 default-k8s-diff-port-485564 kubelet[2985]: E0127 12:59:55.811660    2985 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737982795811186496,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:59:55 default-k8s-diff-port-485564 kubelet[2985]: E0127 12:59:55.811695    2985 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737982795811186496,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:59:58 default-k8s-diff-port-485564 kubelet[2985]: I0127 12:59:58.380590    2985 scope.go:117] "RemoveContainer" containerID="bd30612437d7ea1252c9b7a36060b3c2a46ca25e7fa5e6b98b4065dfd2ff77cf"
	Jan 27 12:59:58 default-k8s-diff-port-485564 kubelet[2985]: E0127 12:59:58.380780    2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-jxz8s_kubernetes-dashboard(e39f19cc-1743-41e1-9b00-b0de4c5d5748)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-jxz8s" podUID="e39f19cc-1743-41e1-9b00-b0de4c5d5748"
	Jan 27 13:00:05 default-k8s-diff-port-485564 kubelet[2985]: E0127 13:00:05.403043    2985 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 13:00:05 default-k8s-diff-port-485564 kubelet[2985]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 13:00:05 default-k8s-diff-port-485564 kubelet[2985]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 13:00:05 default-k8s-diff-port-485564 kubelet[2985]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 13:00:05 default-k8s-diff-port-485564 kubelet[2985]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 13:00:05 default-k8s-diff-port-485564 kubelet[2985]: E0127 13:00:05.814358    2985 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737982805814083415,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 13:00:05 default-k8s-diff-port-485564 kubelet[2985]: E0127 13:00:05.814397    2985 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737982805814083415,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> kubernetes-dashboard [073593dc64ed8c4f2e4e156fadb3c31f230a3fe7df8b0a5d2ec26688f611f453] <==
	2025/01/27 12:47:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:48:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:48:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:49:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:49:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:50:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:50:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:51:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:51:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:52:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:52:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:53:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:53:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:54:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:54:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:55:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:55:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:56:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:56:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:57:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:57:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:58:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:58:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:59:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:59:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [eae7bdcd82dc1daf0185bde5b83e714ec34737442d44431e4376ccddf25a4330] <==
	I0127 12:38:11.831743       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 12:38:11.856610       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 12:38:11.856755       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 12:38:11.872796       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 12:38:11.872950       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-485564_d29a2c4e-fccb-4415-bbb8-dab6cbea4fff!
	I0127 12:38:11.873951       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8eae06c1-e241-4f43-936a-58f01b5b60a2", APIVersion:"v1", ResourceVersion:"429", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-485564_d29a2c4e-fccb-4415-bbb8-dab6cbea4fff became leader
	I0127 12:38:11.973358       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-485564_d29a2c4e-fccb-4415-bbb8-dab6cbea4fff!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-485564 -n default-k8s-diff-port-485564
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-485564 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-x9qcz
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-485564 describe pod metrics-server-f79f97bbb-x9qcz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-485564 describe pod metrics-server-f79f97bbb-x9qcz: exit status 1 (58.573028ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-x9qcz" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-485564 describe pod metrics-server-f79f97bbb-x9qcz: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (1640.76s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (512.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-488586 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0127 12:35:07.001952 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/functional-977534/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:36:36.327186 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-488586 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (8m30.484855005s)

                                                
                                                
-- stdout --
	* [old-k8s-version-488586] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20318
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20318-1724227/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-1724227/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-488586" primary control-plane node in "old-k8s-version-488586" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-488586" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 12:33:30.578081 1775552 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:33:30.578201 1775552 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:33:30.578214 1775552 out.go:358] Setting ErrFile to fd 2...
	I0127 12:33:30.578219 1775552 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:33:30.578376 1775552 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-1724227/.minikube/bin
	I0127 12:33:30.579012 1775552 out.go:352] Setting JSON to false
	I0127 12:33:30.580097 1775552 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":33352,"bootTime":1737947859,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 12:33:30.580203 1775552 start.go:139] virtualization: kvm guest
	I0127 12:33:30.582216 1775552 out.go:177] * [old-k8s-version-488586] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 12:33:30.583416 1775552 out.go:177]   - MINIKUBE_LOCATION=20318
	I0127 12:33:30.583424 1775552 notify.go:220] Checking for updates...
	I0127 12:33:30.585734 1775552 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:33:30.586935 1775552 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20318-1724227/kubeconfig
	I0127 12:33:30.588048 1775552 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-1724227/.minikube
	I0127 12:33:30.589196 1775552 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 12:33:30.590266 1775552 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:33:30.591722 1775552 config.go:182] Loaded profile config "old-k8s-version-488586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 12:33:30.592272 1775552 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:33:30.592323 1775552 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:33:30.608116 1775552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46215
	I0127 12:33:30.608589 1775552 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:33:30.609152 1775552 main.go:141] libmachine: Using API Version  1
	I0127 12:33:30.609195 1775552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:33:30.609590 1775552 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:33:30.609819 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .DriverName
	I0127 12:33:30.611503 1775552 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0127 12:33:30.612732 1775552 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:33:30.613034 1775552 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:33:30.613075 1775552 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:33:30.627920 1775552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46349
	I0127 12:33:30.628379 1775552 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:33:30.628857 1775552 main.go:141] libmachine: Using API Version  1
	I0127 12:33:30.628877 1775552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:33:30.629170 1775552 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:33:30.629360 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .DriverName
	I0127 12:33:30.665151 1775552 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 12:33:30.666381 1775552 start.go:297] selected driver: kvm2
	I0127 12:33:30.666397 1775552 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-488586 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-488586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:33:30.666495 1775552 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:33:30.667226 1775552 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:33:30.667303 1775552 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20318-1724227/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 12:33:30.682961 1775552 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 12:33:30.683354 1775552 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 12:33:30.683387 1775552 cni.go:84] Creating CNI manager for ""
	I0127 12:33:30.683451 1775552 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 12:33:30.683518 1775552 start.go:340] cluster config:
	{Name:old-k8s-version-488586 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-488586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:33:30.683653 1775552 iso.go:125] acquiring lock: {Name:mkadfcca31fda677c8c62cad1d325dd2bd0a2473 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:33:30.685484 1775552 out.go:177] * Starting "old-k8s-version-488586" primary control-plane node in "old-k8s-version-488586" cluster
	I0127 12:33:30.686658 1775552 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 12:33:30.686697 1775552 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0127 12:33:30.686707 1775552 cache.go:56] Caching tarball of preloaded images
	I0127 12:33:30.686817 1775552 preload.go:172] Found /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 12:33:30.686839 1775552 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0127 12:33:30.686932 1775552 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/old-k8s-version-488586/config.json ...
	I0127 12:33:30.687109 1775552 start.go:360] acquireMachinesLock for old-k8s-version-488586: {Name:mk206ddd5e564ea6986d65bef76be5837a8b5360 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 12:33:30.687148 1775552 start.go:364] duration metric: took 21.897µs to acquireMachinesLock for "old-k8s-version-488586"
	I0127 12:33:30.687162 1775552 start.go:96] Skipping create...Using existing machine configuration
	I0127 12:33:30.687170 1775552 fix.go:54] fixHost starting: 
	I0127 12:33:30.687493 1775552 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:33:30.687538 1775552 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:33:30.702519 1775552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38193
	I0127 12:33:30.702960 1775552 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:33:30.703405 1775552 main.go:141] libmachine: Using API Version  1
	I0127 12:33:30.703428 1775552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:33:30.703771 1775552 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:33:30.704019 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .DriverName
	I0127 12:33:30.704248 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetState
	I0127 12:33:30.705867 1775552 fix.go:112] recreateIfNeeded on old-k8s-version-488586: state=Stopped err=<nil>
	I0127 12:33:30.705887 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .DriverName
	W0127 12:33:30.706044 1775552 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 12:33:30.707664 1775552 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-488586" ...
	I0127 12:33:30.708634 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .Start
	I0127 12:33:30.708810 1775552 main.go:141] libmachine: (old-k8s-version-488586) starting domain...
	I0127 12:33:30.708830 1775552 main.go:141] libmachine: (old-k8s-version-488586) ensuring networks are active...
	I0127 12:33:30.709550 1775552 main.go:141] libmachine: (old-k8s-version-488586) Ensuring network default is active
	I0127 12:33:30.709979 1775552 main.go:141] libmachine: (old-k8s-version-488586) Ensuring network mk-old-k8s-version-488586 is active
	I0127 12:33:30.710333 1775552 main.go:141] libmachine: (old-k8s-version-488586) getting domain XML...
	I0127 12:33:30.711027 1775552 main.go:141] libmachine: (old-k8s-version-488586) creating domain...
	I0127 12:33:31.960170 1775552 main.go:141] libmachine: (old-k8s-version-488586) waiting for IP...
	I0127 12:33:31.961052 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:31.961489 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | unable to find current IP address of domain old-k8s-version-488586 in network mk-old-k8s-version-488586
	I0127 12:33:31.961604 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | I0127 12:33:31.961481 1775587 retry.go:31] will retry after 279.916412ms: waiting for domain to come up
	I0127 12:33:32.243225 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:32.243785 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | unable to find current IP address of domain old-k8s-version-488586 in network mk-old-k8s-version-488586
	I0127 12:33:32.243830 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | I0127 12:33:32.243770 1775587 retry.go:31] will retry after 355.667852ms: waiting for domain to come up
	I0127 12:33:32.601564 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:32.602026 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | unable to find current IP address of domain old-k8s-version-488586 in network mk-old-k8s-version-488586
	I0127 12:33:32.602096 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | I0127 12:33:32.602040 1775587 retry.go:31] will retry after 428.843468ms: waiting for domain to come up
	I0127 12:33:33.032816 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:33.033303 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | unable to find current IP address of domain old-k8s-version-488586 in network mk-old-k8s-version-488586
	I0127 12:33:33.033335 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | I0127 12:33:33.033264 1775587 retry.go:31] will retry after 422.321206ms: waiting for domain to come up
	I0127 12:33:33.457032 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:33.457676 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | unable to find current IP address of domain old-k8s-version-488586 in network mk-old-k8s-version-488586
	I0127 12:33:33.457707 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | I0127 12:33:33.457662 1775587 retry.go:31] will retry after 597.437426ms: waiting for domain to come up
	I0127 12:33:34.056355 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:34.056825 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | unable to find current IP address of domain old-k8s-version-488586 in network mk-old-k8s-version-488586
	I0127 12:33:34.056866 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | I0127 12:33:34.056749 1775587 retry.go:31] will retry after 635.678106ms: waiting for domain to come up
	I0127 12:33:34.693631 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:34.694183 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | unable to find current IP address of domain old-k8s-version-488586 in network mk-old-k8s-version-488586
	I0127 12:33:34.694212 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | I0127 12:33:34.694143 1775587 retry.go:31] will retry after 867.346022ms: waiting for domain to come up
	I0127 12:33:35.563116 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:35.563584 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | unable to find current IP address of domain old-k8s-version-488586 in network mk-old-k8s-version-488586
	I0127 12:33:35.563610 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | I0127 12:33:35.563555 1775587 retry.go:31] will retry after 1.152087885s: waiting for domain to come up
	I0127 12:33:36.717351 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:36.717799 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | unable to find current IP address of domain old-k8s-version-488586 in network mk-old-k8s-version-488586
	I0127 12:33:36.717831 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | I0127 12:33:36.717740 1775587 retry.go:31] will retry after 1.644687537s: waiting for domain to come up
	I0127 12:33:38.363922 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:38.364338 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | unable to find current IP address of domain old-k8s-version-488586 in network mk-old-k8s-version-488586
	I0127 12:33:38.364398 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | I0127 12:33:38.364328 1775587 retry.go:31] will retry after 1.631015867s: waiting for domain to come up
	I0127 12:33:39.996678 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:39.997292 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | unable to find current IP address of domain old-k8s-version-488586 in network mk-old-k8s-version-488586
	I0127 12:33:39.997331 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | I0127 12:33:39.997232 1775587 retry.go:31] will retry after 2.371858128s: waiting for domain to come up
	I0127 12:33:42.371719 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:42.372240 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | unable to find current IP address of domain old-k8s-version-488586 in network mk-old-k8s-version-488586
	I0127 12:33:42.372271 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | I0127 12:33:42.372201 1775587 retry.go:31] will retry after 2.985079344s: waiting for domain to come up
	I0127 12:33:45.359769 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:45.360292 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | unable to find current IP address of domain old-k8s-version-488586 in network mk-old-k8s-version-488586
	I0127 12:33:45.360315 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | I0127 12:33:45.360257 1775587 retry.go:31] will retry after 3.824987316s: waiting for domain to come up
	I0127 12:33:49.186367 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:49.186956 1775552 main.go:141] libmachine: (old-k8s-version-488586) found domain IP: 192.168.39.109
	I0127 12:33:49.186976 1775552 main.go:141] libmachine: (old-k8s-version-488586) reserving static IP address...
	I0127 12:33:49.186985 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has current primary IP address 192.168.39.109 and MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:49.187503 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | found host DHCP lease matching {name: "old-k8s-version-488586", mac: "52:54:00:ec:6f:18", ip: "192.168.39.109"} in network mk-old-k8s-version-488586: {Iface:virbr4 ExpiryTime:2025-01-27 13:27:25 +0000 UTC Type:0 Mac:52:54:00:ec:6f:18 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:old-k8s-version-488586 Clientid:01:52:54:00:ec:6f:18}
	I0127 12:33:49.187537 1775552 main.go:141] libmachine: (old-k8s-version-488586) reserved static IP address 192.168.39.109 for domain old-k8s-version-488586
	I0127 12:33:49.187560 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | skip adding static IP to network mk-old-k8s-version-488586 - found existing host DHCP lease matching {name: "old-k8s-version-488586", mac: "52:54:00:ec:6f:18", ip: "192.168.39.109"}
	I0127 12:33:49.187586 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | Getting to WaitForSSH function...
	I0127 12:33:49.187603 1775552 main.go:141] libmachine: (old-k8s-version-488586) waiting for SSH...
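	[editor's note] The "will retry after ..." messages above come from a wait-with-backoff loop while the restarted domain acquires a DHCP lease. Below is a minimal, self-contained Go sketch of that pattern; lookupDomainIP is a hypothetical stand-in for the libvirt DHCP-lease lookup, not minikube's actual API, and the intervals are illustrative only.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupDomainIP is a hypothetical placeholder for querying the libvirt
	// DHCP leases of the domain's network for its current IP address.
	func lookupDomainIP(domain string) (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	// waitForIP polls until an IP is found or the deadline passes, sleeping a
	// randomized, growing interval between attempts, similar in spirit to the
	// "will retry after ..." lines in the log above.
	func waitForIP(domain string, deadline time.Duration) (string, error) {
		start := time.Now()
		backoff := 250 * time.Millisecond
		for time.Since(start) < deadline {
			if ip, err := lookupDomainIP(domain); err == nil {
				return ip, nil
			}
			sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
			time.Sleep(sleep)
			if backoff < 4*time.Second {
				backoff *= 2
			}
		}
		return "", fmt.Errorf("%s never reported an IP within %v", domain, deadline)
	}

	func main() {
		if ip, err := waitForIP("old-k8s-version-488586", 3*time.Second); err != nil {
			fmt.Println("error:", err)
		} else {
			fmt.Println("found domain IP:", ip)
		}
	}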
	I0127 12:33:49.189803 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:49.190216 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:6f:18", ip: ""} in network mk-old-k8s-version-488586: {Iface:virbr4 ExpiryTime:2025-01-27 13:27:25 +0000 UTC Type:0 Mac:52:54:00:ec:6f:18 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:old-k8s-version-488586 Clientid:01:52:54:00:ec:6f:18}
	I0127 12:33:49.190252 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined IP address 192.168.39.109 and MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:49.190370 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | Using SSH client type: external
	I0127 12:33:49.190402 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | Using SSH private key: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/old-k8s-version-488586/id_rsa (-rw-------)
	I0127 12:33:49.190424 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.109 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/old-k8s-version-488586/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 12:33:49.190433 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | About to run SSH command:
	I0127 12:33:49.190441 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | exit 0
	I0127 12:33:49.319466 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | SSH cmd err, output: <nil>: 
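	[editor's note] The probe above runs "exit 0" on the guest through the external ssh client with the options printed in the DBG lines. A rough Go equivalent is sketched below; the address and key path are the values from this log, hard-coded purely for illustration.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// probeSSH runs `exit 0` on the guest through the external ssh binary,
	// mirroring the options shown in the log (no known_hosts, key auth only).
	func probeSSH(addr, keyPath string) error {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			addr,
			"exit 0",
		}
		out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("ssh probe failed: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		err := probeSSH("docker@192.168.39.109",
			"/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/old-k8s-version-488586/id_rsa")
		fmt.Println("SSH cmd err, output:", err)
	}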
	I0127 12:33:49.319844 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetConfigRaw
	I0127 12:33:49.320463 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetIP
	I0127 12:33:49.322968 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:49.323315 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:6f:18", ip: ""} in network mk-old-k8s-version-488586: {Iface:virbr4 ExpiryTime:2025-01-27 13:27:25 +0000 UTC Type:0 Mac:52:54:00:ec:6f:18 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:old-k8s-version-488586 Clientid:01:52:54:00:ec:6f:18}
	I0127 12:33:49.323344 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined IP address 192.168.39.109 and MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:49.323658 1775552 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/old-k8s-version-488586/config.json ...
	I0127 12:33:49.323907 1775552 machine.go:93] provisionDockerMachine start ...
	I0127 12:33:49.323938 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .DriverName
	I0127 12:33:49.324215 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHHostname
	I0127 12:33:49.326501 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:49.326815 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:6f:18", ip: ""} in network mk-old-k8s-version-488586: {Iface:virbr4 ExpiryTime:2025-01-27 13:27:25 +0000 UTC Type:0 Mac:52:54:00:ec:6f:18 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:old-k8s-version-488586 Clientid:01:52:54:00:ec:6f:18}
	I0127 12:33:49.326861 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined IP address 192.168.39.109 and MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:49.327048 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHPort
	I0127 12:33:49.327259 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHKeyPath
	I0127 12:33:49.327429 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHKeyPath
	I0127 12:33:49.327566 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHUsername
	I0127 12:33:49.327740 1775552 main.go:141] libmachine: Using SSH client type: native
	I0127 12:33:49.327974 1775552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0127 12:33:49.327991 1775552 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 12:33:49.435219 1775552 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 12:33:49.435249 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetMachineName
	I0127 12:33:49.435524 1775552 buildroot.go:166] provisioning hostname "old-k8s-version-488586"
	I0127 12:33:49.435553 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetMachineName
	I0127 12:33:49.435767 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHHostname
	I0127 12:33:49.438769 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:49.439155 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:6f:18", ip: ""} in network mk-old-k8s-version-488586: {Iface:virbr4 ExpiryTime:2025-01-27 13:27:25 +0000 UTC Type:0 Mac:52:54:00:ec:6f:18 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:old-k8s-version-488586 Clientid:01:52:54:00:ec:6f:18}
	I0127 12:33:49.439191 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined IP address 192.168.39.109 and MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:49.439303 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHPort
	I0127 12:33:49.439500 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHKeyPath
	I0127 12:33:49.439619 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHKeyPath
	I0127 12:33:49.439794 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHUsername
	I0127 12:33:49.439985 1775552 main.go:141] libmachine: Using SSH client type: native
	I0127 12:33:49.440177 1775552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0127 12:33:49.440193 1775552 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-488586 && echo "old-k8s-version-488586" | sudo tee /etc/hostname
	I0127 12:33:49.565194 1775552 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-488586
	
	I0127 12:33:49.565238 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHHostname
	I0127 12:33:49.567946 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:49.568318 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:6f:18", ip: ""} in network mk-old-k8s-version-488586: {Iface:virbr4 ExpiryTime:2025-01-27 13:27:25 +0000 UTC Type:0 Mac:52:54:00:ec:6f:18 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:old-k8s-version-488586 Clientid:01:52:54:00:ec:6f:18}
	I0127 12:33:49.568338 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined IP address 192.168.39.109 and MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:49.568558 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHPort
	I0127 12:33:49.568714 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHKeyPath
	I0127 12:33:49.568891 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHKeyPath
	I0127 12:33:49.569022 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHUsername
	I0127 12:33:49.569145 1775552 main.go:141] libmachine: Using SSH client type: native
	I0127 12:33:49.569347 1775552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0127 12:33:49.569366 1775552 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-488586' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-488586/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-488586' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 12:33:49.692365 1775552 main.go:141] libmachine: SSH cmd err, output: <nil>: 
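	[editor's note] The hosts-file snippet above is idempotent: it only touches /etc/hosts when the hostname is missing, rewriting an existing 127.0.1.1 entry if present and appending one otherwise. The Go sketch below mirrors that decision logic against a local file path for illustration; in the log the same commands run over SSH.

	package main

	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	// ensureHostsEntry mirrors the shell logic above: if no line already maps
	// the hostname, either rewrite an existing "127.0.1.1 ..." line or append
	// a new one. path is a local file for illustration.
	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
			return nil // already present, nothing to do
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		var out string
		if loopback.Match(data) {
			out = loopback.ReplaceAllString(string(data), "127.0.1.1 "+hostname)
		} else {
			out = strings.TrimRight(string(data), "\n") + "\n127.0.1.1 " + hostname + "\n"
		}
		return os.WriteFile(path, []byte(out), 0644)
	}

	func main() {
		if err := ensureHostsEntry("hosts.sample", "old-k8s-version-488586"); err != nil {
			fmt.Println("error:", err)
		}
	}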
	I0127 12:33:49.692403 1775552 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20318-1724227/.minikube CaCertPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20318-1724227/.minikube}
	I0127 12:33:49.692452 1775552 buildroot.go:174] setting up certificates
	I0127 12:33:49.692475 1775552 provision.go:84] configureAuth start
	I0127 12:33:49.692493 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetMachineName
	I0127 12:33:49.692769 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetIP
	I0127 12:33:49.696099 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:49.696454 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:6f:18", ip: ""} in network mk-old-k8s-version-488586: {Iface:virbr4 ExpiryTime:2025-01-27 13:27:25 +0000 UTC Type:0 Mac:52:54:00:ec:6f:18 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:old-k8s-version-488586 Clientid:01:52:54:00:ec:6f:18}
	I0127 12:33:49.696483 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined IP address 192.168.39.109 and MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:49.696642 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHHostname
	I0127 12:33:49.699415 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:49.699757 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:6f:18", ip: ""} in network mk-old-k8s-version-488586: {Iface:virbr4 ExpiryTime:2025-01-27 13:27:25 +0000 UTC Type:0 Mac:52:54:00:ec:6f:18 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:old-k8s-version-488586 Clientid:01:52:54:00:ec:6f:18}
	I0127 12:33:49.699791 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined IP address 192.168.39.109 and MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:49.699931 1775552 provision.go:143] copyHostCerts
	I0127 12:33:49.699988 1775552 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.pem, removing ...
	I0127 12:33:49.699998 1775552 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.pem
	I0127 12:33:49.700056 1775552 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.pem (1078 bytes)
	I0127 12:33:49.700137 1775552 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-1724227/.minikube/cert.pem, removing ...
	I0127 12:33:49.700145 1775552 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-1724227/.minikube/cert.pem
	I0127 12:33:49.700168 1775552 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20318-1724227/.minikube/cert.pem (1123 bytes)
	I0127 12:33:49.700226 1775552 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-1724227/.minikube/key.pem, removing ...
	I0127 12:33:49.700234 1775552 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-1724227/.minikube/key.pem
	I0127 12:33:49.700253 1775552 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20318-1724227/.minikube/key.pem (1675 bytes)
	I0127 12:33:49.700298 1775552 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-488586 san=[127.0.0.1 192.168.39.109 localhost minikube old-k8s-version-488586]
	I0127 12:33:49.752855 1775552 provision.go:177] copyRemoteCerts
	I0127 12:33:49.752920 1775552 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 12:33:49.752956 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHHostname
	I0127 12:33:49.755434 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:49.755762 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:6f:18", ip: ""} in network mk-old-k8s-version-488586: {Iface:virbr4 ExpiryTime:2025-01-27 13:27:25 +0000 UTC Type:0 Mac:52:54:00:ec:6f:18 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:old-k8s-version-488586 Clientid:01:52:54:00:ec:6f:18}
	I0127 12:33:49.755803 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined IP address 192.168.39.109 and MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:49.755943 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHPort
	I0127 12:33:49.756143 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHKeyPath
	I0127 12:33:49.756293 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHUsername
	I0127 12:33:49.756412 1775552 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/old-k8s-version-488586/id_rsa Username:docker}
	I0127 12:33:49.844395 1775552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0127 12:33:49.868820 1775552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 12:33:49.891149 1775552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 12:33:49.912574 1775552 provision.go:87] duration metric: took 220.086555ms to configureAuth
	I0127 12:33:49.912611 1775552 buildroot.go:189] setting minikube options for container-runtime
	I0127 12:33:49.912809 1775552 config.go:182] Loaded profile config "old-k8s-version-488586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 12:33:49.912991 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHHostname
	I0127 12:33:49.915526 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:49.915944 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:6f:18", ip: ""} in network mk-old-k8s-version-488586: {Iface:virbr4 ExpiryTime:2025-01-27 13:27:25 +0000 UTC Type:0 Mac:52:54:00:ec:6f:18 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:old-k8s-version-488586 Clientid:01:52:54:00:ec:6f:18}
	I0127 12:33:49.915972 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined IP address 192.168.39.109 and MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:49.916201 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHPort
	I0127 12:33:49.916385 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHKeyPath
	I0127 12:33:49.916600 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHKeyPath
	I0127 12:33:49.916733 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHUsername
	I0127 12:33:49.916902 1775552 main.go:141] libmachine: Using SSH client type: native
	I0127 12:33:49.917101 1775552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0127 12:33:49.917130 1775552 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 12:33:50.144261 1775552 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 12:33:50.144287 1775552 machine.go:96] duration metric: took 820.36297ms to provisionDockerMachine
	I0127 12:33:50.144305 1775552 start.go:293] postStartSetup for "old-k8s-version-488586" (driver="kvm2")
	I0127 12:33:50.144319 1775552 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 12:33:50.144353 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .DriverName
	I0127 12:33:50.144739 1775552 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 12:33:50.144774 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHHostname
	I0127 12:33:50.147340 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:50.147685 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:6f:18", ip: ""} in network mk-old-k8s-version-488586: {Iface:virbr4 ExpiryTime:2025-01-27 13:27:25 +0000 UTC Type:0 Mac:52:54:00:ec:6f:18 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:old-k8s-version-488586 Clientid:01:52:54:00:ec:6f:18}
	I0127 12:33:50.147716 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined IP address 192.168.39.109 and MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:50.147881 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHPort
	I0127 12:33:50.148065 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHKeyPath
	I0127 12:33:50.148245 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHUsername
	I0127 12:33:50.148414 1775552 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/old-k8s-version-488586/id_rsa Username:docker}
	I0127 12:33:50.237707 1775552 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 12:33:50.241910 1775552 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 12:33:50.241935 1775552 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-1724227/.minikube/addons for local assets ...
	I0127 12:33:50.242004 1775552 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-1724227/.minikube/files for local assets ...
	I0127 12:33:50.242083 1775552 filesync.go:149] local asset: /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem -> 17313962.pem in /etc/ssl/certs
	I0127 12:33:50.242184 1775552 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 12:33:50.252368 1775552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem --> /etc/ssl/certs/17313962.pem (1708 bytes)
	I0127 12:33:50.276096 1775552 start.go:296] duration metric: took 131.772605ms for postStartSetup
	I0127 12:33:50.276174 1775552 fix.go:56] duration metric: took 19.58900219s for fixHost
	I0127 12:33:50.276218 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHHostname
	I0127 12:33:50.279048 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:50.279415 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:6f:18", ip: ""} in network mk-old-k8s-version-488586: {Iface:virbr4 ExpiryTime:2025-01-27 13:27:25 +0000 UTC Type:0 Mac:52:54:00:ec:6f:18 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:old-k8s-version-488586 Clientid:01:52:54:00:ec:6f:18}
	I0127 12:33:50.279443 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined IP address 192.168.39.109 and MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:50.279631 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHPort
	I0127 12:33:50.279848 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHKeyPath
	I0127 12:33:50.280014 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHKeyPath
	I0127 12:33:50.280185 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHUsername
	I0127 12:33:50.280368 1775552 main.go:141] libmachine: Using SSH client type: native
	I0127 12:33:50.280585 1775552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0127 12:33:50.280596 1775552 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 12:33:50.386894 1775552 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737981230.346773601
	
	I0127 12:33:50.386920 1775552 fix.go:216] guest clock: 1737981230.346773601
	I0127 12:33:50.386929 1775552 fix.go:229] Guest: 2025-01-27 12:33:50.346773601 +0000 UTC Remote: 2025-01-27 12:33:50.276181855 +0000 UTC m=+19.738538351 (delta=70.591746ms)
	I0127 12:33:50.386956 1775552 fix.go:200] guest clock delta is within tolerance: 70.591746ms
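	[editor's note] The clock check above compares the guest's "date +%s.%N" output against the host wall clock and accepts a small delta. A hedged Go sketch of that comparison follows; the tolerance constant is illustrative, not minikube's exact threshold, and the sample values are taken from the log lines above.

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	// clockDelta parses a `date +%s.%N` style timestamp from the guest and
	// returns how far it is from the given local wall-clock time.
	func clockDelta(guestOut string, local time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(guestOut, 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return local.Sub(guest), nil
	}

	func main() {
		// Values taken from the log lines above.
		local := time.Date(2025, 1, 27, 12, 33, 50, 276181855, time.UTC)
		delta, err := clockDelta("1737981230.346773601", local)
		if err != nil {
			panic(err)
		}
		const tolerance = 2 * time.Second // illustrative threshold only
		fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta.Abs() < tolerance)
	}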
	I0127 12:33:50.386962 1775552 start.go:83] releasing machines lock for "old-k8s-version-488586", held for 19.699804787s
	I0127 12:33:50.386981 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .DriverName
	I0127 12:33:50.387244 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetIP
	I0127 12:33:50.389801 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:50.390120 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:6f:18", ip: ""} in network mk-old-k8s-version-488586: {Iface:virbr4 ExpiryTime:2025-01-27 13:27:25 +0000 UTC Type:0 Mac:52:54:00:ec:6f:18 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:old-k8s-version-488586 Clientid:01:52:54:00:ec:6f:18}
	I0127 12:33:50.390149 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined IP address 192.168.39.109 and MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:50.390286 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .DriverName
	I0127 12:33:50.390818 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .DriverName
	I0127 12:33:50.391007 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .DriverName
	I0127 12:33:50.391113 1775552 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 12:33:50.391176 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHHostname
	I0127 12:33:50.391235 1775552 ssh_runner.go:195] Run: cat /version.json
	I0127 12:33:50.391262 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHHostname
	I0127 12:33:50.393839 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:50.394078 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:50.394133 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:6f:18", ip: ""} in network mk-old-k8s-version-488586: {Iface:virbr4 ExpiryTime:2025-01-27 13:27:25 +0000 UTC Type:0 Mac:52:54:00:ec:6f:18 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:old-k8s-version-488586 Clientid:01:52:54:00:ec:6f:18}
	I0127 12:33:50.394157 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined IP address 192.168.39.109 and MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:50.394310 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHPort
	I0127 12:33:50.394476 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHKeyPath
	I0127 12:33:50.394534 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:6f:18", ip: ""} in network mk-old-k8s-version-488586: {Iface:virbr4 ExpiryTime:2025-01-27 13:27:25 +0000 UTC Type:0 Mac:52:54:00:ec:6f:18 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:old-k8s-version-488586 Clientid:01:52:54:00:ec:6f:18}
	I0127 12:33:50.394569 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined IP address 192.168.39.109 and MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:50.394631 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHUsername
	I0127 12:33:50.394737 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHPort
	I0127 12:33:50.394785 1775552 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/old-k8s-version-488586/id_rsa Username:docker}
	I0127 12:33:50.394907 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHKeyPath
	I0127 12:33:50.395061 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetSSHUsername
	I0127 12:33:50.395236 1775552 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/old-k8s-version-488586/id_rsa Username:docker}
	I0127 12:33:50.512993 1775552 ssh_runner.go:195] Run: systemctl --version
	I0127 12:33:50.518706 1775552 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 12:33:50.661383 1775552 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 12:33:50.667579 1775552 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 12:33:50.667646 1775552 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 12:33:50.682576 1775552 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 12:33:50.682603 1775552 start.go:495] detecting cgroup driver to use...
	I0127 12:33:50.682677 1775552 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 12:33:50.702289 1775552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 12:33:50.715870 1775552 docker.go:217] disabling cri-docker service (if available) ...
	I0127 12:33:50.715957 1775552 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 12:33:50.729228 1775552 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 12:33:50.742197 1775552 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 12:33:50.855562 1775552 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 12:33:51.015563 1775552 docker.go:233] disabling docker service ...
	I0127 12:33:51.015647 1775552 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 12:33:51.029952 1775552 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 12:33:51.042166 1775552 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 12:33:51.178292 1775552 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 12:33:51.292119 1775552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 12:33:51.304888 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 12:33:51.323115 1775552 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0127 12:33:51.323184 1775552 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:33:51.332873 1775552 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 12:33:51.332942 1775552 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:33:51.343595 1775552 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:33:51.355556 1775552 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:33:51.365145 1775552 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 12:33:51.374766 1775552 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 12:33:51.383441 1775552 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 12:33:51.383487 1775552 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 12:33:51.394796 1775552 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
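	[editor's note] The sequence above tolerates a failed sysctl probe of net.bridge.bridge-nf-call-iptables by loading br_netfilter and then enabling IPv4 forwarding. A rough Go sketch of that check-then-fallback flow, running the same shell commands locally rather than through the SSH runner:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%s %v: %v: %s", name, args, err, out)
		}
		return nil
	}

	// ensureNetfilter mirrors the flow in the log: verify the bridge netfilter
	// sysctl exists, fall back to loading br_netfilter if it does not, and then
	// turn on IPv4 forwarding.
	func ensureNetfilter() error {
		if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
			// Might be okay: the module may simply not be loaded yet.
			fmt.Println("couldn't verify netfilter, loading br_netfilter:", err)
			if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
				return err
			}
		}
		return run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
	}

	func main() {
		if err := ensureNetfilter(); err != nil {
			fmt.Println("error:", err)
		}
	}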
	I0127 12:33:51.403811 1775552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:33:51.517435 1775552 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 12:33:51.602151 1775552 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 12:33:51.602232 1775552 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 12:33:51.606714 1775552 start.go:563] Will wait 60s for crictl version
	I0127 12:33:51.606786 1775552 ssh_runner.go:195] Run: which crictl
	I0127 12:33:51.610189 1775552 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 12:33:51.653772 1775552 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
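	[editor's note] The multi-line block above is the parsed "crictl version" output. A small Go sketch of turning that key/value text into a struct; the struct and field names here are my own, not minikube's.

	package main

	import (
		"fmt"
		"strings"
	)

	// RuntimeInfo holds the fields printed by `crictl version`.
	type RuntimeInfo struct {
		Version, RuntimeName, RuntimeVersion, RuntimeAPIVersion string
	}

	// parseCrictlVersion splits "Key:  value" lines into a RuntimeInfo.
	func parseCrictlVersion(out string) RuntimeInfo {
		info := RuntimeInfo{}
		for _, line := range strings.Split(out, "\n") {
			key, val, ok := strings.Cut(line, ":")
			if !ok {
				continue
			}
			val = strings.TrimSpace(val)
			switch strings.TrimSpace(key) {
			case "Version":
				info.Version = val
			case "RuntimeName":
				info.RuntimeName = val
			case "RuntimeVersion":
				info.RuntimeVersion = val
			case "RuntimeApiVersion":
				info.RuntimeAPIVersion = val
			}
		}
		return info
	}

	func main() {
		sample := "Version:  0.1.0\nRuntimeName:  cri-o\nRuntimeVersion:  1.29.1\nRuntimeApiVersion:  v1"
		fmt.Printf("%+v\n", parseCrictlVersion(sample))
	}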
	I0127 12:33:51.653874 1775552 ssh_runner.go:195] Run: crio --version
	I0127 12:33:51.685294 1775552 ssh_runner.go:195] Run: crio --version
	I0127 12:33:51.716333 1775552 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0127 12:33:51.717391 1775552 main.go:141] libmachine: (old-k8s-version-488586) Calling .GetIP
	I0127 12:33:51.720138 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:51.720555 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:6f:18", ip: ""} in network mk-old-k8s-version-488586: {Iface:virbr4 ExpiryTime:2025-01-27 13:27:25 +0000 UTC Type:0 Mac:52:54:00:ec:6f:18 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:old-k8s-version-488586 Clientid:01:52:54:00:ec:6f:18}
	I0127 12:33:51.720588 1775552 main.go:141] libmachine: (old-k8s-version-488586) DBG | domain old-k8s-version-488586 has defined IP address 192.168.39.109 and MAC address 52:54:00:ec:6f:18 in network mk-old-k8s-version-488586
	I0127 12:33:51.720788 1775552 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0127 12:33:51.724758 1775552 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:33:51.736720 1775552 kubeadm.go:883] updating cluster {Name:old-k8s-version-488586 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-488586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0127 12:33:51.736854 1775552 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 12:33:51.736963 1775552 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 12:33:51.776845 1775552 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 12:33:51.776908 1775552 ssh_runner.go:195] Run: which lz4
	I0127 12:33:51.781025 1775552 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 12:33:51.784945 1775552 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 12:33:51.784979 1775552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0127 12:33:53.200356 1775552 crio.go:462] duration metric: took 1.419383817s to copy over tarball
	I0127 12:33:53.200430 1775552 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 12:33:56.033135 1775552 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.832669871s)
	I0127 12:33:56.033166 1775552 crio.go:469] duration metric: took 2.832781938s to extract the tarball
	I0127 12:33:56.033176 1775552 ssh_runner.go:146] rm: /preloaded.tar.lz4
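	[editor's note] The preload path above boils down to: check whether /preloaded.tar.lz4 already exists on the guest, copy it over if not, extract it into /var with xattrs preserved, then delete the tarball. A condensed Go sketch of that sequence; local exec stands in for the SSH runner, and the scp destination is illustrative while the paths are the ones from this log.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// extractPreload copies a preloaded image tarball into place and unpacks it
	// into /var, roughly following the commands in the log above.
	func extractPreload(localTarball, scpTarget string) error {
		// Skip the copy if the tarball is already present on the target
		// (the log checks this with `stat -c "%s %y" /preloaded.tar.lz4`).
		if _, err := os.Stat("/preloaded.tar.lz4"); err != nil {
			if out, err := exec.Command("scp", localTarball, scpTarget).CombinedOutput(); err != nil {
				return fmt.Errorf("scp: %v: %s", err, out)
			}
		}
		steps := [][]string{
			{"sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
				"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4"},
			{"sudo", "rm", "-f", "/preloaded.tar.lz4"},
		}
		for _, s := range steps {
			if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v: %v: %s", s, err, out)
			}
		}
		return nil
	}

	func main() {
		err := extractPreload(
			"/home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4",
			"docker@192.168.39.109:/preloaded.tar.lz4")
		fmt.Println("preload extract err:", err)
	}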
	I0127 12:33:56.074177 1775552 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 12:33:56.113154 1775552 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 12:33:56.113191 1775552 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 12:33:56.113301 1775552 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:33:56.113299 1775552 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 12:33:56.113381 1775552 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0127 12:33:56.113301 1775552 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0127 12:33:56.113433 1775552 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 12:33:56.113509 1775552 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 12:33:56.113361 1775552 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 12:33:56.113880 1775552 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0127 12:33:56.116150 1775552 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 12:33:56.116165 1775552 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0127 12:33:56.116150 1775552 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0127 12:33:56.116311 1775552 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:33:56.116497 1775552 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 12:33:56.116521 1775552 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 12:33:56.116525 1775552 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 12:33:56.117100 1775552 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0127 12:33:56.349735 1775552 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0127 12:33:56.372227 1775552 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0127 12:33:56.377221 1775552 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0127 12:33:56.380626 1775552 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0127 12:33:56.387122 1775552 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 12:33:56.389933 1775552 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0127 12:33:56.391573 1775552 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0127 12:33:56.405402 1775552 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0127 12:33:56.405446 1775552 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0127 12:33:56.405485 1775552 ssh_runner.go:195] Run: which crictl
	I0127 12:33:56.479550 1775552 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0127 12:33:56.479597 1775552 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 12:33:56.479645 1775552 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0127 12:33:56.479687 1775552 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 12:33:56.479651 1775552 ssh_runner.go:195] Run: which crictl
	I0127 12:33:56.479740 1775552 ssh_runner.go:195] Run: which crictl
	I0127 12:33:56.522112 1775552 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0127 12:33:56.522171 1775552 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0127 12:33:56.522189 1775552 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0127 12:33:56.522214 1775552 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0127 12:33:56.522224 1775552 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 12:33:56.522254 1775552 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 12:33:56.522262 1775552 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0127 12:33:56.522264 1775552 ssh_runner.go:195] Run: which crictl
	I0127 12:33:56.522278 1775552 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0127 12:33:56.522298 1775552 ssh_runner.go:195] Run: which crictl
	I0127 12:33:56.522335 1775552 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 12:33:56.522360 1775552 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 12:33:56.522302 1775552 ssh_runner.go:195] Run: which crictl
	I0127 12:33:56.522383 1775552 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 12:33:56.522220 1775552 ssh_runner.go:195] Run: which crictl
	I0127 12:33:56.586112 1775552 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 12:33:56.591695 1775552 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 12:33:56.591742 1775552 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 12:33:56.591753 1775552 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 12:33:56.591793 1775552 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 12:33:56.591818 1775552 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 12:33:56.591710 1775552 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 12:33:56.716921 1775552 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 12:33:56.744924 1775552 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 12:33:56.744945 1775552 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 12:33:56.748548 1775552 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 12:33:56.748615 1775552 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 12:33:56.748635 1775552 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 12:33:56.748788 1775552 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 12:33:56.832148 1775552 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0127 12:33:56.878449 1775552 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 12:33:56.885944 1775552 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0127 12:33:56.889618 1775552 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 12:33:56.889648 1775552 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0127 12:33:56.889702 1775552 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 12:33:56.889804 1775552 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 12:33:56.951528 1775552 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0127 12:33:56.974074 1775552 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0127 12:33:56.974107 1775552 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0127 12:33:56.974294 1775552 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0127 12:33:57.316177 1775552 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:33:57.458780 1775552 cache_images.go:92] duration metric: took 1.345567394s to LoadCachedImages
	W0127 12:33:57.458907 1775552 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
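The image-cache phase above probes the runtime for each required image with "podman image inspect", marks the missing ones as "needs transfer", clears stale copies via "crictl rmi", and finally gives up because the on-disk cache files are absent. A minimal local sketch of that presence check, assuming an illustrative helper name (imageInRuntime) rather than minikube's own API; the real commands run on the guest over SSH:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// imageInRuntime reports whether the container runtime's local storage already
	// holds the image, mirroring "sudo podman image inspect --format {{.Id}} <image>".
	func imageInRuntime(image string) bool {
		// A zero exit status means the image is already present.
		return exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run() == nil
	}

	func main() {
		for _, img := range []string{"registry.k8s.io/pause:3.2", "registry.k8s.io/etcd:3.4.13-0"} {
			if !imageInRuntime(img) {
				fmt.Printf("%s needs transfer: not present in container runtime\n", img)
			}
		}
	}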
	I0127 12:33:57.458924 1775552 kubeadm.go:934] updating node { 192.168.39.109 8443 v1.20.0 crio true true} ...
	I0127 12:33:57.459032 1775552 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-488586 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.109
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-488586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 12:33:57.459133 1775552 ssh_runner.go:195] Run: crio config
	I0127 12:33:57.502516 1775552 cni.go:84] Creating CNI manager for ""
	I0127 12:33:57.502563 1775552 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 12:33:57.502579 1775552 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 12:33:57.502620 1775552 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.109 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-488586 NodeName:old-k8s-version-488586 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.109"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.109 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0127 12:33:57.502827 1775552 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.109
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-488586"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.109
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.109"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 12:33:57.502907 1775552 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0127 12:33:57.513318 1775552 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 12:33:57.513396 1775552 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 12:33:57.522967 1775552 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0127 12:33:57.538504 1775552 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 12:33:57.553935 1775552 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0127 12:33:57.570173 1775552 ssh_runner.go:195] Run: grep 192.168.39.109	control-plane.minikube.internal$ /etc/hosts
	I0127 12:33:57.573675 1775552 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.109	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:33:57.584482 1775552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:33:57.727181 1775552 ssh_runner.go:195] Run: sudo systemctl start kubelet
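Just before restarting the kubelet, the tooling rewrites /etc/hosts so that control-plane.minikube.internal resolves to the node IP, using a grep -v / echo / cp pipeline so the entry is replaced rather than appended twice. A rough equivalent in Go, working on a scratch file and using the illustrative function name ensureHostsEntry; the real edit happens remotely with sudo:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry rewrites a hosts-style file so exactly one line maps host
	// to ip, dropping any previous mapping first (same effect as the
	// grep -v / echo / cp pipeline in the log above).
	func ensureHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+host) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		// Work on a scratch copy instead of the real /etc/hosts.
		_ = os.WriteFile("/tmp/hosts.sketch", []byte("127.0.0.1\tlocalhost\n"), 0o644)
		if err := ensureHostsEntry("/tmp/hosts.sketch", "192.168.39.109", "control-plane.minikube.internal"); err != nil {
			fmt.Println("update failed:", err)
		}
	}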
	I0127 12:33:57.742490 1775552 certs.go:68] Setting up /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/old-k8s-version-488586 for IP: 192.168.39.109
	I0127 12:33:57.742514 1775552 certs.go:194] generating shared ca certs ...
	I0127 12:33:57.742555 1775552 certs.go:226] acquiring lock for ca certs: {Name:mk9e5c59e9dbe250ccc1601895509e2d4f3690e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:33:57.742774 1775552 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.key
	I0127 12:33:57.742847 1775552 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.key
	I0127 12:33:57.742865 1775552 certs.go:256] generating profile certs ...
	I0127 12:33:57.743005 1775552 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/old-k8s-version-488586/client.key
	I0127 12:33:57.794566 1775552 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/old-k8s-version-488586/apiserver.key.1691d3b4
	I0127 12:33:57.794694 1775552 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/old-k8s-version-488586/proxy-client.key
	I0127 12:33:57.794874 1775552 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/1731396.pem (1338 bytes)
	W0127 12:33:57.794917 1775552 certs.go:480] ignoring /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/1731396_empty.pem, impossibly tiny 0 bytes
	I0127 12:33:57.794930 1775552 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 12:33:57.794963 1775552 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem (1078 bytes)
	I0127 12:33:57.794994 1775552 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem (1123 bytes)
	I0127 12:33:57.795024 1775552 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/key.pem (1675 bytes)
	I0127 12:33:57.795078 1775552 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem (1708 bytes)
	I0127 12:33:57.795929 1775552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 12:33:57.836794 1775552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 12:33:57.873785 1775552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 12:33:57.904081 1775552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 12:33:57.936645 1775552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/old-k8s-version-488586/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0127 12:33:57.970387 1775552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/old-k8s-version-488586/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 12:33:57.993382 1775552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/old-k8s-version-488586/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 12:33:58.016586 1775552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/old-k8s-version-488586/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 12:33:58.039239 1775552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 12:33:58.061594 1775552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/1731396.pem --> /usr/share/ca-certificates/1731396.pem (1338 bytes)
	I0127 12:33:58.084655 1775552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem --> /usr/share/ca-certificates/17313962.pem (1708 bytes)
	I0127 12:33:58.109841 1775552 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 12:33:58.125532 1775552 ssh_runner.go:195] Run: openssl version
	I0127 12:33:58.131007 1775552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 12:33:58.141198 1775552 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:33:58.145340 1775552 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 11:24 /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:33:58.145383 1775552 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:33:58.151059 1775552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 12:33:58.160854 1775552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1731396.pem && ln -fs /usr/share/ca-certificates/1731396.pem /etc/ssl/certs/1731396.pem"
	I0127 12:33:58.171745 1775552 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1731396.pem
	I0127 12:33:58.175866 1775552 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 11:32 /usr/share/ca-certificates/1731396.pem
	I0127 12:33:58.175934 1775552 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1731396.pem
	I0127 12:33:58.181671 1775552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1731396.pem /etc/ssl/certs/51391683.0"
	I0127 12:33:58.192733 1775552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17313962.pem && ln -fs /usr/share/ca-certificates/17313962.pem /etc/ssl/certs/17313962.pem"
	I0127 12:33:58.204432 1775552 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17313962.pem
	I0127 12:33:58.208717 1775552 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 11:32 /usr/share/ca-certificates/17313962.pem
	I0127 12:33:58.208762 1775552 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17313962.pem
	I0127 12:33:58.213985 1775552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17313962.pem /etc/ssl/certs/3ec20f2e.0"
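Each CA dropped into /usr/share/ca-certificates is then exposed to OpenSSL consumers through an /etc/ssl/certs/<subject-hash>.0 symlink, where the hash comes from "openssl x509 -hash -noout" (b5213941.0 for minikubeCA.pem above). A small sketch of computing that link name, with caLinkName as an illustrative helper rather than a minikube function:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// caLinkName builds the /etc/ssl/certs/<subject-hash>.0 symlink name for a CA,
	// using the same "openssl x509 -hash -noout" step as the log above.
	func caLinkName(certPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return "", err
		}
		return "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0", nil
	}

	func main() {
		name, err := caLinkName("/usr/share/ca-certificates/minikubeCA.pem")
		if err != nil {
			fmt.Println("hash failed:", err)
			return
		}
		fmt.Println(name) // e.g. /etc/ssl/certs/b5213941.0
	}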
	I0127 12:33:58.223694 1775552 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 12:33:58.227754 1775552 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 12:33:58.233169 1775552 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 12:33:58.238627 1775552 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 12:33:58.244577 1775552 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 12:33:58.249909 1775552 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 12:33:58.255723 1775552 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0127 12:33:58.261325 1775552 kubeadm.go:392] StartCluster: {Name:old-k8s-version-488586 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-488586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:33:58.261434 1775552 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 12:33:58.261486 1775552 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 12:33:58.295547 1775552 cri.go:89] found id: ""
	I0127 12:33:58.295598 1775552 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 12:33:58.304544 1775552 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 12:33:58.304562 1775552 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 12:33:58.304609 1775552 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 12:33:58.313159 1775552 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 12:33:58.314092 1775552 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-488586" does not appear in /home/jenkins/minikube-integration/20318-1724227/kubeconfig
	I0127 12:33:58.314966 1775552 kubeconfig.go:62] /home/jenkins/minikube-integration/20318-1724227/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-488586" cluster setting kubeconfig missing "old-k8s-version-488586" context setting]
	I0127 12:33:58.315898 1775552 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/kubeconfig: {Name:mk9a903cebca75da3c74ff68f708e0a4e05f999b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:33:58.323870 1775552 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 12:33:58.332897 1775552 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.109
	I0127 12:33:58.332929 1775552 kubeadm.go:1160] stopping kube-system containers ...
	I0127 12:33:58.332944 1775552 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0127 12:33:58.332987 1775552 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 12:33:58.365832 1775552 cri.go:89] found id: ""
	I0127 12:33:58.365889 1775552 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 12:33:58.381536 1775552 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 12:33:58.390458 1775552 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 12:33:58.390482 1775552 kubeadm.go:157] found existing configuration files:
	
	I0127 12:33:58.390534 1775552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 12:33:58.399091 1775552 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 12:33:58.399153 1775552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 12:33:58.408174 1775552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 12:33:58.416791 1775552 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 12:33:58.416856 1775552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 12:33:58.425180 1775552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 12:33:58.433231 1775552 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 12:33:58.433290 1775552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 12:33:58.441763 1775552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 12:33:58.450538 1775552 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 12:33:58.450613 1775552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 12:33:58.460458 1775552 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 12:33:58.469363 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:33:58.599137 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:33:59.400167 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:33:59.623347 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:33:59.791548 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
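Because existing configuration files were found, the restart path replays individual "kubeadm init phase" subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd) with the version-pinned binaries prepended to PATH instead of running a full init. A hedged sketch of that sequence; runInitPhases is an illustrative name and the real invocations happen on the guest over SSH:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// runInitPhases replays the staged "kubeadm init phase" calls shown above
	// with the version-pinned binary directory on PATH.
	func runInitPhases(binDir, config string) error {
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, phase := range phases {
			cmd := fmt.Sprintf("sudo env PATH=%q:$PATH kubeadm init phase %s --config %s", binDir, phase, config)
			if err := exec.Command("/bin/bash", "-c", cmd).Run(); err != nil {
				return fmt.Errorf("phase %q failed: %w", phase, err)
			}
		}
		return nil
	}

	func main() {
		if err := runInitPhases("/var/lib/minikube/binaries/v1.20.0", "/var/tmp/minikube/kubeadm.yaml"); err != nil {
			fmt.Println(err)
		}
	}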
	I0127 12:33:59.871302 1775552 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:33:59.871397 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:00.371965 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:00.872401 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:01.372053 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:01.871948 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:02.371957 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:02.871968 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:03.371500 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:03.871677 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:04.371911 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:04.872008 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:05.371646 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:05.871774 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:06.371921 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:06.872243 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:07.371963 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:07.871949 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:08.371452 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:08.871951 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:09.372058 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:09.872471 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:10.371960 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:10.872077 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:11.371692 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:11.871957 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:12.371499 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:12.871557 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:13.371452 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:13.871695 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:14.372475 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:14.871565 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:15.372035 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:15.872200 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:16.371697 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:16.872374 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:17.372466 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:17.871673 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:18.372204 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:18.872166 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:19.371538 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:19.871994 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:20.371976 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:20.871859 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:21.371993 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:21.871973 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:22.371597 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:22.872402 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:23.371951 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:23.871675 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:24.371525 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:24.871632 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:25.371766 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:25.871948 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:26.371987 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:26.872326 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:27.371695 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:27.872001 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:28.372066 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:28.871889 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:29.372126 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:29.871966 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:30.371901 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:30.871996 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:31.371553 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:31.871983 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:32.371488 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:32.871846 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:33.371893 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:33.872482 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:34.371968 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:34.872439 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:35.372343 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:35.871783 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:36.372001 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:36.872024 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:37.372030 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:37.871640 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:38.372144 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:38.872302 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:39.371789 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:39.871582 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:40.371701 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:40.871614 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:41.371827 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:41.871945 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:42.371616 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:42.871693 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:43.371960 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:43.872487 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:44.371954 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:44.871704 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:45.371937 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:45.871816 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:46.372533 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:46.872407 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:47.371771 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:47.871501 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:48.372248 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:48.872170 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:49.372475 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:49.871954 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:50.372277 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:50.871473 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:51.371992 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:51.871956 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:52.372146 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:52.871976 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:53.371947 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:53.871818 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:54.371550 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:54.871564 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:55.371805 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:55.871574 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:56.372499 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:56.871605 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:57.371488 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:57.872243 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:58.372132 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:58.871970 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:59.371508 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
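The half-second cadence above is a simple poll: pgrep for a kube-apiserver process, sleep, retry until a deadline. A compact sketch of such a wait loop, with waitForAPIServerProcess as an assumed name rather than minikube's own function:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServerProcess polls pgrep roughly every 500ms, as in the log,
	// until a kube-apiserver process appears or the timeout expires.
	func waitForAPIServerProcess(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				return nil // a matching process exists
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
	}

	func main() {
		if err := waitForAPIServerProcess(time.Minute); err != nil {
			fmt.Println(err)
		}
	}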
	I0127 12:34:59.871671 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:34:59.871745 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:34:59.906257 1775552 cri.go:89] found id: ""
	I0127 12:34:59.906286 1775552 logs.go:282] 0 containers: []
	W0127 12:34:59.906296 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:34:59.906304 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:34:59.906369 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:34:59.937701 1775552 cri.go:89] found id: ""
	I0127 12:34:59.937726 1775552 logs.go:282] 0 containers: []
	W0127 12:34:59.937735 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:34:59.937741 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:34:59.937801 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:34:59.968770 1775552 cri.go:89] found id: ""
	I0127 12:34:59.968805 1775552 logs.go:282] 0 containers: []
	W0127 12:34:59.968816 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:34:59.968824 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:34:59.968898 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:35:00.001118 1775552 cri.go:89] found id: ""
	I0127 12:35:00.001151 1775552 logs.go:282] 0 containers: []
	W0127 12:35:00.001174 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:35:00.001191 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:35:00.001261 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:35:00.032992 1775552 cri.go:89] found id: ""
	I0127 12:35:00.033030 1775552 logs.go:282] 0 containers: []
	W0127 12:35:00.033041 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:35:00.033050 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:35:00.033104 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:35:00.064430 1775552 cri.go:89] found id: ""
	I0127 12:35:00.064460 1775552 logs.go:282] 0 containers: []
	W0127 12:35:00.064472 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:35:00.064480 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:35:00.064564 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:35:00.094804 1775552 cri.go:89] found id: ""
	I0127 12:35:00.094833 1775552 logs.go:282] 0 containers: []
	W0127 12:35:00.094843 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:35:00.094851 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:35:00.094922 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:35:00.128752 1775552 cri.go:89] found id: ""
	I0127 12:35:00.128776 1775552 logs.go:282] 0 containers: []
	W0127 12:35:00.128786 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:35:00.128801 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:35:00.128816 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:35:00.176471 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:35:00.176506 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:35:00.189232 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:35:00.189259 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:35:00.302762 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:35:00.302790 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:35:00.302817 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:35:00.377745 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:35:00.377777 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
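When the poll comes up empty, the tool enumerates control-plane containers one component at a time with "crictl ps -a --quiet --name=<component>" and, finding none, gathers kubelet, dmesg, CRI-O and container-status logs before retrying. A local sketch of that per-component listing, using the illustrative name containerIDsByName; the real calls run remotely with sudo:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDsByName lists container IDs for one component via crictl,
	// mirroring the "listing CRI containers" calls in the log above.
	func containerIDsByName(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
			ids, err := containerIDsByName(c)
			if err != nil {
				fmt.Println(c, "lookup failed:", err)
				continue
			}
			fmt.Printf("%s: %d containers\n", c, len(ids))
		}
	}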
	I0127 12:35:02.914869 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:35:02.939904 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:35:02.939977 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:35:02.974412 1775552 cri.go:89] found id: ""
	I0127 12:35:02.974441 1775552 logs.go:282] 0 containers: []
	W0127 12:35:02.974451 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:35:02.974460 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:35:02.974518 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:35:03.009423 1775552 cri.go:89] found id: ""
	I0127 12:35:03.009459 1775552 logs.go:282] 0 containers: []
	W0127 12:35:03.009471 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:35:03.009500 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:35:03.009566 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:35:03.040664 1775552 cri.go:89] found id: ""
	I0127 12:35:03.040694 1775552 logs.go:282] 0 containers: []
	W0127 12:35:03.040703 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:35:03.040709 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:35:03.040769 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:35:03.077596 1775552 cri.go:89] found id: ""
	I0127 12:35:03.077630 1775552 logs.go:282] 0 containers: []
	W0127 12:35:03.077639 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:35:03.077646 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:35:03.077701 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:35:03.113303 1775552 cri.go:89] found id: ""
	I0127 12:35:03.113329 1775552 logs.go:282] 0 containers: []
	W0127 12:35:03.113337 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:35:03.113343 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:35:03.113423 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:35:03.149903 1775552 cri.go:89] found id: ""
	I0127 12:35:03.149931 1775552 logs.go:282] 0 containers: []
	W0127 12:35:03.149940 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:35:03.149946 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:35:03.149999 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:35:03.187227 1775552 cri.go:89] found id: ""
	I0127 12:35:03.187255 1775552 logs.go:282] 0 containers: []
	W0127 12:35:03.187263 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:35:03.187270 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:35:03.187320 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:35:03.218469 1775552 cri.go:89] found id: ""
	I0127 12:35:03.218503 1775552 logs.go:282] 0 containers: []
	W0127 12:35:03.218515 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:35:03.218529 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:35:03.218543 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:35:03.265615 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:35:03.265649 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:35:03.278224 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:35:03.278251 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:35:03.349722 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:35:03.349751 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:35:03.349769 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:35:03.426102 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:35:03.426151 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:35:05.967450 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:35:05.979801 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:35:05.979914 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:35:06.013489 1775552 cri.go:89] found id: ""
	I0127 12:35:06.013520 1775552 logs.go:282] 0 containers: []
	W0127 12:35:06.013529 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:35:06.013535 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:35:06.013606 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:35:06.047569 1775552 cri.go:89] found id: ""
	I0127 12:35:06.047595 1775552 logs.go:282] 0 containers: []
	W0127 12:35:06.047606 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:35:06.047613 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:35:06.047680 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:35:06.083615 1775552 cri.go:89] found id: ""
	I0127 12:35:06.083640 1775552 logs.go:282] 0 containers: []
	W0127 12:35:06.083647 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:35:06.083653 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:35:06.083705 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:35:06.116916 1775552 cri.go:89] found id: ""
	I0127 12:35:06.116941 1775552 logs.go:282] 0 containers: []
	W0127 12:35:06.116949 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:35:06.116955 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:35:06.117003 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:35:06.149344 1775552 cri.go:89] found id: ""
	I0127 12:35:06.149371 1775552 logs.go:282] 0 containers: []
	W0127 12:35:06.149379 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:35:06.149385 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:35:06.149444 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:35:06.181022 1775552 cri.go:89] found id: ""
	I0127 12:35:06.181052 1775552 logs.go:282] 0 containers: []
	W0127 12:35:06.181063 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:35:06.181071 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:35:06.181138 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:35:06.214575 1775552 cri.go:89] found id: ""
	I0127 12:35:06.214610 1775552 logs.go:282] 0 containers: []
	W0127 12:35:06.214622 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:35:06.214631 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:35:06.214702 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:35:06.246311 1775552 cri.go:89] found id: ""
	I0127 12:35:06.246341 1775552 logs.go:282] 0 containers: []
	W0127 12:35:06.246351 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:35:06.246363 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:35:06.246375 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:35:06.259088 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:35:06.259122 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:35:06.330663 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:35:06.330692 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:35:06.330708 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:35:06.409171 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:35:06.409216 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:35:06.450057 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:35:06.450091 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:35:09.000604 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:35:09.012513 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:35:09.012581 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:35:09.046483 1775552 cri.go:89] found id: ""
	I0127 12:35:09.046509 1775552 logs.go:282] 0 containers: []
	W0127 12:35:09.046518 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:35:09.046524 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:35:09.046580 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:35:09.077114 1775552 cri.go:89] found id: ""
	I0127 12:35:09.077151 1775552 logs.go:282] 0 containers: []
	W0127 12:35:09.077162 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:35:09.077170 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:35:09.077244 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:35:09.110941 1775552 cri.go:89] found id: ""
	I0127 12:35:09.110980 1775552 logs.go:282] 0 containers: []
	W0127 12:35:09.110992 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:35:09.111002 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:35:09.111068 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:35:09.145509 1775552 cri.go:89] found id: ""
	I0127 12:35:09.145544 1775552 logs.go:282] 0 containers: []
	W0127 12:35:09.145556 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:35:09.145564 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:35:09.145631 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:35:09.180617 1775552 cri.go:89] found id: ""
	I0127 12:35:09.180653 1775552 logs.go:282] 0 containers: []
	W0127 12:35:09.180665 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:35:09.180673 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:35:09.180745 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:35:09.215592 1775552 cri.go:89] found id: ""
	I0127 12:35:09.215644 1775552 logs.go:282] 0 containers: []
	W0127 12:35:09.215653 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:35:09.215659 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:35:09.215715 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:35:09.246511 1775552 cri.go:89] found id: ""
	I0127 12:35:09.246547 1775552 logs.go:282] 0 containers: []
	W0127 12:35:09.246559 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:35:09.246567 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:35:09.246632 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:35:09.278703 1775552 cri.go:89] found id: ""
	I0127 12:35:09.278735 1775552 logs.go:282] 0 containers: []
	W0127 12:35:09.278762 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:35:09.278777 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:35:09.278794 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:35:09.293975 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:35:09.294008 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:35:09.389953 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:35:09.389990 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:35:09.390009 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:35:09.480438 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:35:09.480477 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:35:09.516638 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:35:09.516669 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:35:12.065868 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:35:12.078096 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:35:12.078173 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:35:12.113203 1775552 cri.go:89] found id: ""
	I0127 12:35:12.113235 1775552 logs.go:282] 0 containers: []
	W0127 12:35:12.113244 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:35:12.113250 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:35:12.113314 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:35:12.151766 1775552 cri.go:89] found id: ""
	I0127 12:35:12.151799 1775552 logs.go:282] 0 containers: []
	W0127 12:35:12.151812 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:35:12.151819 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:35:12.151883 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:35:12.184153 1775552 cri.go:89] found id: ""
	I0127 12:35:12.184182 1775552 logs.go:282] 0 containers: []
	W0127 12:35:12.184190 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:35:12.184196 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:35:12.184258 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:35:12.216060 1775552 cri.go:89] found id: ""
	I0127 12:35:12.216085 1775552 logs.go:282] 0 containers: []
	W0127 12:35:12.216094 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:35:12.216099 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:35:12.216151 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:35:12.250247 1775552 cri.go:89] found id: ""
	I0127 12:35:12.250275 1775552 logs.go:282] 0 containers: []
	W0127 12:35:12.250286 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:35:12.250293 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:35:12.250361 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:35:12.281709 1775552 cri.go:89] found id: ""
	I0127 12:35:12.281747 1775552 logs.go:282] 0 containers: []
	W0127 12:35:12.281759 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:35:12.281769 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:35:12.281837 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:35:12.314436 1775552 cri.go:89] found id: ""
	I0127 12:35:12.314475 1775552 logs.go:282] 0 containers: []
	W0127 12:35:12.314491 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:35:12.314501 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:35:12.314578 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:35:12.349321 1775552 cri.go:89] found id: ""
	I0127 12:35:12.349353 1775552 logs.go:282] 0 containers: []
	W0127 12:35:12.349361 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:35:12.349371 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:35:12.349386 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:35:12.398004 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:35:12.398033 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:35:12.412079 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:35:12.412106 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:35:12.484078 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:35:12.484100 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:35:12.484115 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:35:12.566022 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:35:12.566056 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:35:15.106215 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:35:15.126024 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:35:15.126092 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:35:15.171619 1775552 cri.go:89] found id: ""
	I0127 12:35:15.171659 1775552 logs.go:282] 0 containers: []
	W0127 12:35:15.171672 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:35:15.171680 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:35:15.171764 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:35:15.209146 1775552 cri.go:89] found id: ""
	I0127 12:35:15.209189 1775552 logs.go:282] 0 containers: []
	W0127 12:35:15.209203 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:35:15.209212 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:35:15.209287 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:35:15.240651 1775552 cri.go:89] found id: ""
	I0127 12:35:15.240680 1775552 logs.go:282] 0 containers: []
	W0127 12:35:15.240692 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:35:15.240700 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:35:15.240772 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:35:15.276446 1775552 cri.go:89] found id: ""
	I0127 12:35:15.276482 1775552 logs.go:282] 0 containers: []
	W0127 12:35:15.276495 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:35:15.276504 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:35:15.276564 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:35:15.315308 1775552 cri.go:89] found id: ""
	I0127 12:35:15.315340 1775552 logs.go:282] 0 containers: []
	W0127 12:35:15.315351 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:35:15.315359 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:35:15.315425 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:35:15.347053 1775552 cri.go:89] found id: ""
	I0127 12:35:15.347103 1775552 logs.go:282] 0 containers: []
	W0127 12:35:15.347127 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:35:15.347136 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:35:15.347216 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:35:15.379856 1775552 cri.go:89] found id: ""
	I0127 12:35:15.379891 1775552 logs.go:282] 0 containers: []
	W0127 12:35:15.379903 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:35:15.379911 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:35:15.379982 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:35:15.413097 1775552 cri.go:89] found id: ""
	I0127 12:35:15.413132 1775552 logs.go:282] 0 containers: []
	W0127 12:35:15.413144 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:35:15.413157 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:35:15.413180 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:35:15.425711 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:35:15.425745 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:35:15.493995 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:35:15.494024 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:35:15.494040 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:35:15.568220 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:35:15.568261 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:35:15.603249 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:35:15.603275 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:35:18.150893 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:35:18.163663 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:35:18.163746 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:35:18.197887 1775552 cri.go:89] found id: ""
	I0127 12:35:18.197921 1775552 logs.go:282] 0 containers: []
	W0127 12:35:18.197934 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:35:18.197942 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:35:18.198005 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:35:18.229761 1775552 cri.go:89] found id: ""
	I0127 12:35:18.229797 1775552 logs.go:282] 0 containers: []
	W0127 12:35:18.229808 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:35:18.229819 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:35:18.229899 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:35:18.264565 1775552 cri.go:89] found id: ""
	I0127 12:35:18.264593 1775552 logs.go:282] 0 containers: []
	W0127 12:35:18.264602 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:35:18.264654 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:35:18.264717 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:35:18.299470 1775552 cri.go:89] found id: ""
	I0127 12:35:18.299507 1775552 logs.go:282] 0 containers: []
	W0127 12:35:18.299520 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:35:18.299529 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:35:18.299595 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:35:18.330782 1775552 cri.go:89] found id: ""
	I0127 12:35:18.330820 1775552 logs.go:282] 0 containers: []
	W0127 12:35:18.330837 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:35:18.330851 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:35:18.330923 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:35:18.362427 1775552 cri.go:89] found id: ""
	I0127 12:35:18.362453 1775552 logs.go:282] 0 containers: []
	W0127 12:35:18.362461 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:35:18.362467 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:35:18.362531 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:35:18.394544 1775552 cri.go:89] found id: ""
	I0127 12:35:18.394577 1775552 logs.go:282] 0 containers: []
	W0127 12:35:18.394589 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:35:18.394599 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:35:18.394669 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:35:18.427319 1775552 cri.go:89] found id: ""
	I0127 12:35:18.427351 1775552 logs.go:282] 0 containers: []
	W0127 12:35:18.427361 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:35:18.427377 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:35:18.427394 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:35:18.439165 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:35:18.439194 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:35:18.505659 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:35:18.505684 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:35:18.505699 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:35:18.584035 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:35:18.584074 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:35:18.621287 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:35:18.621326 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:35:21.174403 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:35:21.187586 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:35:21.187645 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:35:21.222724 1775552 cri.go:89] found id: ""
	I0127 12:35:21.222767 1775552 logs.go:282] 0 containers: []
	W0127 12:35:21.222779 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:35:21.222788 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:35:21.222854 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:35:21.254452 1775552 cri.go:89] found id: ""
	I0127 12:35:21.254489 1775552 logs.go:282] 0 containers: []
	W0127 12:35:21.254501 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:35:21.254509 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:35:21.254565 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:35:21.287662 1775552 cri.go:89] found id: ""
	I0127 12:35:21.287697 1775552 logs.go:282] 0 containers: []
	W0127 12:35:21.287710 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:35:21.287718 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:35:21.287790 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:35:21.323568 1775552 cri.go:89] found id: ""
	I0127 12:35:21.323597 1775552 logs.go:282] 0 containers: []
	W0127 12:35:21.323605 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:35:21.323613 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:35:21.323677 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:35:21.354503 1775552 cri.go:89] found id: ""
	I0127 12:35:21.354529 1775552 logs.go:282] 0 containers: []
	W0127 12:35:21.354537 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:35:21.354543 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:35:21.354596 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:35:21.387723 1775552 cri.go:89] found id: ""
	I0127 12:35:21.387751 1775552 logs.go:282] 0 containers: []
	W0127 12:35:21.387759 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:35:21.387765 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:35:21.387817 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:35:21.417623 1775552 cri.go:89] found id: ""
	I0127 12:35:21.417651 1775552 logs.go:282] 0 containers: []
	W0127 12:35:21.417662 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:35:21.417670 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:35:21.417736 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:35:21.449128 1775552 cri.go:89] found id: ""
	I0127 12:35:21.449170 1775552 logs.go:282] 0 containers: []
	W0127 12:35:21.449183 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:35:21.449196 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:35:21.449213 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:35:21.505002 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:35:21.505043 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:35:21.518804 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:35:21.518845 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:35:21.591209 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:35:21.591236 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:35:21.591253 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:35:21.669103 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:35:21.669143 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:35:24.206890 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:35:24.220419 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:35:24.220509 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:35:24.257717 1775552 cri.go:89] found id: ""
	I0127 12:35:24.257751 1775552 logs.go:282] 0 containers: []
	W0127 12:35:24.257763 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:35:24.257771 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:35:24.257829 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:35:24.296731 1775552 cri.go:89] found id: ""
	I0127 12:35:24.296769 1775552 logs.go:282] 0 containers: []
	W0127 12:35:24.296781 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:35:24.296789 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:35:24.296846 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:35:24.333186 1775552 cri.go:89] found id: ""
	I0127 12:35:24.333213 1775552 logs.go:282] 0 containers: []
	W0127 12:35:24.333224 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:35:24.333232 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:35:24.333297 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:35:24.366616 1775552 cri.go:89] found id: ""
	I0127 12:35:24.366647 1775552 logs.go:282] 0 containers: []
	W0127 12:35:24.366658 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:35:24.366667 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:35:24.366732 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:35:24.398705 1775552 cri.go:89] found id: ""
	I0127 12:35:24.398732 1775552 logs.go:282] 0 containers: []
	W0127 12:35:24.398754 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:35:24.398763 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:35:24.398831 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:35:24.429071 1775552 cri.go:89] found id: ""
	I0127 12:35:24.429097 1775552 logs.go:282] 0 containers: []
	W0127 12:35:24.429105 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:35:24.429110 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:35:24.429178 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:35:24.462849 1775552 cri.go:89] found id: ""
	I0127 12:35:24.462876 1775552 logs.go:282] 0 containers: []
	W0127 12:35:24.462886 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:35:24.462893 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:35:24.462964 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:35:24.494075 1775552 cri.go:89] found id: ""
	I0127 12:35:24.494103 1775552 logs.go:282] 0 containers: []
	W0127 12:35:24.494111 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:35:24.494121 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:35:24.494135 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:35:24.544504 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:35:24.544535 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:35:24.557390 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:35:24.557423 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:35:24.629491 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:35:24.629516 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:35:24.629532 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:35:24.711291 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:35:24.711328 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:35:27.253141 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:35:27.266836 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:35:27.266901 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:35:27.301444 1775552 cri.go:89] found id: ""
	I0127 12:35:27.301481 1775552 logs.go:282] 0 containers: []
	W0127 12:35:27.301493 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:35:27.301502 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:35:27.301564 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:35:27.336475 1775552 cri.go:89] found id: ""
	I0127 12:35:27.336519 1775552 logs.go:282] 0 containers: []
	W0127 12:35:27.336532 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:35:27.336539 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:35:27.336607 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:35:27.369541 1775552 cri.go:89] found id: ""
	I0127 12:35:27.369574 1775552 logs.go:282] 0 containers: []
	W0127 12:35:27.369585 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:35:27.369593 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:35:27.369663 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:35:27.403403 1775552 cri.go:89] found id: ""
	I0127 12:35:27.403438 1775552 logs.go:282] 0 containers: []
	W0127 12:35:27.403450 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:35:27.403458 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:35:27.403529 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:35:27.433917 1775552 cri.go:89] found id: ""
	I0127 12:35:27.433944 1775552 logs.go:282] 0 containers: []
	W0127 12:35:27.433951 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:35:27.433957 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:35:27.434021 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:35:27.468541 1775552 cri.go:89] found id: ""
	I0127 12:35:27.468571 1775552 logs.go:282] 0 containers: []
	W0127 12:35:27.468581 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:35:27.468596 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:35:27.468674 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:35:27.500865 1775552 cri.go:89] found id: ""
	I0127 12:35:27.500900 1775552 logs.go:282] 0 containers: []
	W0127 12:35:27.500917 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:35:27.500925 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:35:27.500988 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:35:27.531469 1775552 cri.go:89] found id: ""
	I0127 12:35:27.531503 1775552 logs.go:282] 0 containers: []
	W0127 12:35:27.531515 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:35:27.531528 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:35:27.531547 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:35:27.605193 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:35:27.605216 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:35:27.605228 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:35:27.686187 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:35:27.686233 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:35:27.726447 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:35:27.726475 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:35:27.776923 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:35:27.776959 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:35:30.290851 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:35:30.306067 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:35:30.306144 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:35:30.344113 1775552 cri.go:89] found id: ""
	I0127 12:35:30.344145 1775552 logs.go:282] 0 containers: []
	W0127 12:35:30.344156 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:35:30.344165 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:35:30.344230 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:35:30.375320 1775552 cri.go:89] found id: ""
	I0127 12:35:30.375354 1775552 logs.go:282] 0 containers: []
	W0127 12:35:30.375365 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:35:30.375373 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:35:30.375450 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:35:30.409624 1775552 cri.go:89] found id: ""
	I0127 12:35:30.409657 1775552 logs.go:282] 0 containers: []
	W0127 12:35:30.409668 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:35:30.409675 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:35:30.409753 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:35:30.443483 1775552 cri.go:89] found id: ""
	I0127 12:35:30.443513 1775552 logs.go:282] 0 containers: []
	W0127 12:35:30.443524 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:35:30.443532 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:35:30.443585 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:35:30.475439 1775552 cri.go:89] found id: ""
	I0127 12:35:30.475469 1775552 logs.go:282] 0 containers: []
	W0127 12:35:30.475479 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:35:30.475487 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:35:30.475553 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:35:30.512379 1775552 cri.go:89] found id: ""
	I0127 12:35:30.512407 1775552 logs.go:282] 0 containers: []
	W0127 12:35:30.512416 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:35:30.512423 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:35:30.512485 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:35:30.545183 1775552 cri.go:89] found id: ""
	I0127 12:35:30.545210 1775552 logs.go:282] 0 containers: []
	W0127 12:35:30.545219 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:35:30.545226 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:35:30.545293 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:35:30.575400 1775552 cri.go:89] found id: ""
	I0127 12:35:30.575429 1775552 logs.go:282] 0 containers: []
	W0127 12:35:30.575438 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:35:30.575448 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:35:30.575465 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:35:30.588402 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:35:30.588436 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:35:30.658194 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:35:30.658222 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:35:30.658235 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:35:30.730711 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:35:30.730764 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:35:30.766087 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:35:30.766117 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:35:33.319045 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:35:33.333860 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:35:33.333926 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:35:33.372828 1775552 cri.go:89] found id: ""
	I0127 12:35:33.372858 1775552 logs.go:282] 0 containers: []
	W0127 12:35:33.372869 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:35:33.372876 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:35:33.372950 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:35:33.403517 1775552 cri.go:89] found id: ""
	I0127 12:35:33.403544 1775552 logs.go:282] 0 containers: []
	W0127 12:35:33.403553 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:35:33.403559 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:35:33.403611 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:35:33.433971 1775552 cri.go:89] found id: ""
	I0127 12:35:33.434001 1775552 logs.go:282] 0 containers: []
	W0127 12:35:33.434013 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:35:33.434021 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:35:33.434088 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:35:33.468106 1775552 cri.go:89] found id: ""
	I0127 12:35:33.468135 1775552 logs.go:282] 0 containers: []
	W0127 12:35:33.468146 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:35:33.468154 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:35:33.468216 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:35:33.499252 1775552 cri.go:89] found id: ""
	I0127 12:35:33.499283 1775552 logs.go:282] 0 containers: []
	W0127 12:35:33.499292 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:35:33.499299 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:35:33.499361 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:35:33.535719 1775552 cri.go:89] found id: ""
	I0127 12:35:33.535749 1775552 logs.go:282] 0 containers: []
	W0127 12:35:33.535760 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:35:33.535769 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:35:33.535843 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:35:33.569141 1775552 cri.go:89] found id: ""
	I0127 12:35:33.569172 1775552 logs.go:282] 0 containers: []
	W0127 12:35:33.569181 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:35:33.569187 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:35:33.569253 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:35:33.603619 1775552 cri.go:89] found id: ""
	I0127 12:35:33.603641 1775552 logs.go:282] 0 containers: []
	W0127 12:35:33.603648 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:35:33.603657 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:35:33.603668 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:35:33.657495 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:35:33.657540 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:35:33.670287 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:35:33.670314 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:35:33.737391 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:35:33.737425 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:35:33.737442 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:35:33.819533 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:35:33.819566 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:35:36.357804 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:35:36.370627 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:35:36.370702 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:35:36.403977 1775552 cri.go:89] found id: ""
	I0127 12:35:36.404010 1775552 logs.go:282] 0 containers: []
	W0127 12:35:36.404022 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:35:36.404030 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:35:36.404094 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:35:36.436431 1775552 cri.go:89] found id: ""
	I0127 12:35:36.436460 1775552 logs.go:282] 0 containers: []
	W0127 12:35:36.436471 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:35:36.436478 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:35:36.436545 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:35:36.467932 1775552 cri.go:89] found id: ""
	I0127 12:35:36.467964 1775552 logs.go:282] 0 containers: []
	W0127 12:35:36.467974 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:35:36.467982 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:35:36.468050 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:35:36.498901 1775552 cri.go:89] found id: ""
	I0127 12:35:36.498936 1775552 logs.go:282] 0 containers: []
	W0127 12:35:36.498949 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:35:36.498958 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:35:36.499022 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:35:36.530538 1775552 cri.go:89] found id: ""
	I0127 12:35:36.530571 1775552 logs.go:282] 0 containers: []
	W0127 12:35:36.530583 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:35:36.530591 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:35:36.530655 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:35:36.561505 1775552 cri.go:89] found id: ""
	I0127 12:35:36.561538 1775552 logs.go:282] 0 containers: []
	W0127 12:35:36.561552 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:35:36.561559 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:35:36.561626 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:35:36.592335 1775552 cri.go:89] found id: ""
	I0127 12:35:36.592368 1775552 logs.go:282] 0 containers: []
	W0127 12:35:36.592380 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:35:36.592388 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:35:36.592469 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:35:36.628397 1775552 cri.go:89] found id: ""
	I0127 12:35:36.628436 1775552 logs.go:282] 0 containers: []
	W0127 12:35:36.628449 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:35:36.628464 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:35:36.628484 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:35:36.678216 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:35:36.678256 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:35:36.692370 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:35:36.692401 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:35:36.758121 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:35:36.758159 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:35:36.758176 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:35:36.834347 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:35:36.834377 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:35:39.377451 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:35:39.390810 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:35:39.390899 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:35:39.421924 1775552 cri.go:89] found id: ""
	I0127 12:35:39.421955 1775552 logs.go:282] 0 containers: []
	W0127 12:35:39.421967 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:35:39.421975 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:35:39.422039 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:35:39.452770 1775552 cri.go:89] found id: ""
	I0127 12:35:39.452803 1775552 logs.go:282] 0 containers: []
	W0127 12:35:39.452821 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:35:39.452828 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:35:39.452884 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:35:39.483114 1775552 cri.go:89] found id: ""
	I0127 12:35:39.483138 1775552 logs.go:282] 0 containers: []
	W0127 12:35:39.483146 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:35:39.483151 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:35:39.483219 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:35:39.515844 1775552 cri.go:89] found id: ""
	I0127 12:35:39.515873 1775552 logs.go:282] 0 containers: []
	W0127 12:35:39.515881 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:35:39.515887 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:35:39.515953 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:35:39.543551 1775552 cri.go:89] found id: ""
	I0127 12:35:39.543582 1775552 logs.go:282] 0 containers: []
	W0127 12:35:39.543592 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:35:39.543600 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:35:39.543666 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:35:39.574322 1775552 cri.go:89] found id: ""
	I0127 12:35:39.574350 1775552 logs.go:282] 0 containers: []
	W0127 12:35:39.574362 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:35:39.574402 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:35:39.574454 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:35:39.603353 1775552 cri.go:89] found id: ""
	I0127 12:35:39.603382 1775552 logs.go:282] 0 containers: []
	W0127 12:35:39.603392 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:35:39.603401 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:35:39.603464 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:35:39.632515 1775552 cri.go:89] found id: ""
	I0127 12:35:39.632545 1775552 logs.go:282] 0 containers: []
	W0127 12:35:39.632555 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:35:39.632568 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:35:39.632582 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:35:39.708870 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:35:39.708905 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:35:39.743747 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:35:39.743778 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:35:39.790810 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:35:39.790846 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:35:39.803791 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:35:39.803815 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:35:39.869953 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:35:42.370769 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:35:42.384630 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:35:42.384690 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:35:42.414733 1775552 cri.go:89] found id: ""
	I0127 12:35:42.414768 1775552 logs.go:282] 0 containers: []
	W0127 12:35:42.414780 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:35:42.414788 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:35:42.414845 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:35:42.445028 1775552 cri.go:89] found id: ""
	I0127 12:35:42.445052 1775552 logs.go:282] 0 containers: []
	W0127 12:35:42.445060 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:35:42.445066 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:35:42.445113 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:35:42.479557 1775552 cri.go:89] found id: ""
	I0127 12:35:42.479589 1775552 logs.go:282] 0 containers: []
	W0127 12:35:42.479600 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:35:42.479608 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:35:42.479663 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:35:42.511317 1775552 cri.go:89] found id: ""
	I0127 12:35:42.511345 1775552 logs.go:282] 0 containers: []
	W0127 12:35:42.511356 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:35:42.511364 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:35:42.511429 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:35:42.541949 1775552 cri.go:89] found id: ""
	I0127 12:35:42.541974 1775552 logs.go:282] 0 containers: []
	W0127 12:35:42.541983 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:35:42.541991 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:35:42.542050 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:35:42.570757 1775552 cri.go:89] found id: ""
	I0127 12:35:42.570784 1775552 logs.go:282] 0 containers: []
	W0127 12:35:42.570792 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:35:42.570800 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:35:42.570868 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:35:42.602478 1775552 cri.go:89] found id: ""
	I0127 12:35:42.602498 1775552 logs.go:282] 0 containers: []
	W0127 12:35:42.602505 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:35:42.602510 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:35:42.602565 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:35:42.632528 1775552 cri.go:89] found id: ""
	I0127 12:35:42.632558 1775552 logs.go:282] 0 containers: []
	W0127 12:35:42.632567 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:35:42.632577 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:35:42.632588 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:35:42.681842 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:35:42.681882 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:35:42.695665 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:35:42.695695 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:35:42.760815 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:35:42.760845 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:35:42.760866 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:35:42.832224 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:35:42.832259 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:35:45.366894 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:35:45.380401 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:35:45.380477 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:35:45.414951 1775552 cri.go:89] found id: ""
	I0127 12:35:45.414975 1775552 logs.go:282] 0 containers: []
	W0127 12:35:45.414983 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:35:45.414994 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:35:45.415048 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:35:45.452674 1775552 cri.go:89] found id: ""
	I0127 12:35:45.452705 1775552 logs.go:282] 0 containers: []
	W0127 12:35:45.452715 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:35:45.452721 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:35:45.452783 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:35:45.485177 1775552 cri.go:89] found id: ""
	I0127 12:35:45.485204 1775552 logs.go:282] 0 containers: []
	W0127 12:35:45.485215 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:35:45.485221 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:35:45.485284 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:35:45.515870 1775552 cri.go:89] found id: ""
	I0127 12:35:45.515894 1775552 logs.go:282] 0 containers: []
	W0127 12:35:45.515902 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:35:45.515908 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:35:45.515963 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:35:45.546266 1775552 cri.go:89] found id: ""
	I0127 12:35:45.546297 1775552 logs.go:282] 0 containers: []
	W0127 12:35:45.546308 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:35:45.546316 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:35:45.546387 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:35:45.576818 1775552 cri.go:89] found id: ""
	I0127 12:35:45.576849 1775552 logs.go:282] 0 containers: []
	W0127 12:35:45.576857 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:35:45.576863 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:35:45.576924 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:35:45.612463 1775552 cri.go:89] found id: ""
	I0127 12:35:45.612491 1775552 logs.go:282] 0 containers: []
	W0127 12:35:45.612502 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:35:45.612509 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:35:45.612566 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:35:45.646295 1775552 cri.go:89] found id: ""
	I0127 12:35:45.646324 1775552 logs.go:282] 0 containers: []
	W0127 12:35:45.646334 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:35:45.646349 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:35:45.646364 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:35:45.702555 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:35:45.702589 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:35:45.715119 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:35:45.715151 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:35:45.780468 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:35:45.780491 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:35:45.780504 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:35:45.856674 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:35:45.856711 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:35:48.396045 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:35:48.410880 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:35:48.410950 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:35:48.447064 1775552 cri.go:89] found id: ""
	I0127 12:35:48.447095 1775552 logs.go:282] 0 containers: []
	W0127 12:35:48.447107 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:35:48.447115 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:35:48.447180 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:35:48.482027 1775552 cri.go:89] found id: ""
	I0127 12:35:48.482055 1775552 logs.go:282] 0 containers: []
	W0127 12:35:48.482063 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:35:48.482069 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:35:48.482121 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:35:48.514525 1775552 cri.go:89] found id: ""
	I0127 12:35:48.514550 1775552 logs.go:282] 0 containers: []
	W0127 12:35:48.514561 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:35:48.514568 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:35:48.514630 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:35:48.545069 1775552 cri.go:89] found id: ""
	I0127 12:35:48.545098 1775552 logs.go:282] 0 containers: []
	W0127 12:35:48.545108 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:35:48.545114 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:35:48.545172 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:35:48.576465 1775552 cri.go:89] found id: ""
	I0127 12:35:48.576497 1775552 logs.go:282] 0 containers: []
	W0127 12:35:48.576507 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:35:48.576513 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:35:48.576563 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:35:48.615341 1775552 cri.go:89] found id: ""
	I0127 12:35:48.615366 1775552 logs.go:282] 0 containers: []
	W0127 12:35:48.615375 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:35:48.615383 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:35:48.615441 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:35:48.647862 1775552 cri.go:89] found id: ""
	I0127 12:35:48.647894 1775552 logs.go:282] 0 containers: []
	W0127 12:35:48.647906 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:35:48.647913 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:35:48.647995 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:35:48.679712 1775552 cri.go:89] found id: ""
	I0127 12:35:48.679761 1775552 logs.go:282] 0 containers: []
	W0127 12:35:48.679773 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:35:48.679787 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:35:48.679802 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:35:48.714964 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:35:48.714999 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:35:48.765800 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:35:48.765835 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:35:48.777808 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:35:48.777838 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:35:48.860036 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:35:48.860062 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:35:48.860090 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:35:51.436943 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:35:51.449951 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:35:51.450039 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:35:51.484549 1775552 cri.go:89] found id: ""
	I0127 12:35:51.484580 1775552 logs.go:282] 0 containers: []
	W0127 12:35:51.484589 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:35:51.484596 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:35:51.484656 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:35:51.516105 1775552 cri.go:89] found id: ""
	I0127 12:35:51.516132 1775552 logs.go:282] 0 containers: []
	W0127 12:35:51.516141 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:35:51.516147 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:35:51.516220 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:35:51.552323 1775552 cri.go:89] found id: ""
	I0127 12:35:51.552353 1775552 logs.go:282] 0 containers: []
	W0127 12:35:51.552362 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:35:51.552369 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:35:51.552438 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:35:51.584121 1775552 cri.go:89] found id: ""
	I0127 12:35:51.584150 1775552 logs.go:282] 0 containers: []
	W0127 12:35:51.584158 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:35:51.584164 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:35:51.584230 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:35:51.622841 1775552 cri.go:89] found id: ""
	I0127 12:35:51.622873 1775552 logs.go:282] 0 containers: []
	W0127 12:35:51.622886 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:35:51.622894 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:35:51.622959 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:35:51.660612 1775552 cri.go:89] found id: ""
	I0127 12:35:51.660645 1775552 logs.go:282] 0 containers: []
	W0127 12:35:51.660656 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:35:51.660663 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:35:51.660718 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:35:51.698262 1775552 cri.go:89] found id: ""
	I0127 12:35:51.698295 1775552 logs.go:282] 0 containers: []
	W0127 12:35:51.698307 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:35:51.698314 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:35:51.698373 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:35:51.731694 1775552 cri.go:89] found id: ""
	I0127 12:35:51.731729 1775552 logs.go:282] 0 containers: []
	W0127 12:35:51.731737 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:35:51.731749 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:35:51.731764 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:35:51.799138 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:35:51.799164 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:35:51.799178 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:35:51.882973 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:35:51.883013 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:35:51.921467 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:35:51.921496 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:35:51.972736 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:35:51.972771 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:35:54.487966 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:35:54.500530 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:35:54.500596 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:35:54.533420 1775552 cri.go:89] found id: ""
	I0127 12:35:54.533447 1775552 logs.go:282] 0 containers: []
	W0127 12:35:54.533455 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:35:54.533463 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:35:54.533528 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:35:54.563239 1775552 cri.go:89] found id: ""
	I0127 12:35:54.563264 1775552 logs.go:282] 0 containers: []
	W0127 12:35:54.563273 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:35:54.563282 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:35:54.563348 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:35:54.595276 1775552 cri.go:89] found id: ""
	I0127 12:35:54.595306 1775552 logs.go:282] 0 containers: []
	W0127 12:35:54.595316 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:35:54.595323 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:35:54.595384 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:35:54.625407 1775552 cri.go:89] found id: ""
	I0127 12:35:54.625433 1775552 logs.go:282] 0 containers: []
	W0127 12:35:54.625441 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:35:54.625447 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:35:54.625504 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:35:54.672054 1775552 cri.go:89] found id: ""
	I0127 12:35:54.672086 1775552 logs.go:282] 0 containers: []
	W0127 12:35:54.672098 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:35:54.672106 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:35:54.672160 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:35:54.713131 1775552 cri.go:89] found id: ""
	I0127 12:35:54.713158 1775552 logs.go:282] 0 containers: []
	W0127 12:35:54.713167 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:35:54.713173 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:35:54.713227 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:35:54.746504 1775552 cri.go:89] found id: ""
	I0127 12:35:54.746532 1775552 logs.go:282] 0 containers: []
	W0127 12:35:54.746545 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:35:54.746553 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:35:54.746607 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:35:54.778490 1775552 cri.go:89] found id: ""
	I0127 12:35:54.778517 1775552 logs.go:282] 0 containers: []
	W0127 12:35:54.778528 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:35:54.778541 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:35:54.778560 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:35:54.845124 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:35:54.845152 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:35:54.845176 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:35:54.916934 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:35:54.916971 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:35:54.952911 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:35:54.952956 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:35:55.004235 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:35:55.004271 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:35:57.518026 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:35:57.531480 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:35:57.531560 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:35:57.564598 1775552 cri.go:89] found id: ""
	I0127 12:35:57.564633 1775552 logs.go:282] 0 containers: []
	W0127 12:35:57.564644 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:35:57.564650 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:35:57.564718 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:35:57.600012 1775552 cri.go:89] found id: ""
	I0127 12:35:57.600040 1775552 logs.go:282] 0 containers: []
	W0127 12:35:57.600050 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:35:57.600058 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:35:57.600128 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:35:57.630225 1775552 cri.go:89] found id: ""
	I0127 12:35:57.630256 1775552 logs.go:282] 0 containers: []
	W0127 12:35:57.630264 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:35:57.630270 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:35:57.630324 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:35:57.665288 1775552 cri.go:89] found id: ""
	I0127 12:35:57.665331 1775552 logs.go:282] 0 containers: []
	W0127 12:35:57.665346 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:35:57.665355 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:35:57.665417 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:35:57.698262 1775552 cri.go:89] found id: ""
	I0127 12:35:57.698291 1775552 logs.go:282] 0 containers: []
	W0127 12:35:57.698299 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:35:57.698305 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:35:57.698357 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:35:57.729859 1775552 cri.go:89] found id: ""
	I0127 12:35:57.729893 1775552 logs.go:282] 0 containers: []
	W0127 12:35:57.729905 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:35:57.729912 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:35:57.729980 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:35:57.761853 1775552 cri.go:89] found id: ""
	I0127 12:35:57.761879 1775552 logs.go:282] 0 containers: []
	W0127 12:35:57.761890 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:35:57.761896 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:35:57.761950 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:35:57.792815 1775552 cri.go:89] found id: ""
	I0127 12:35:57.792847 1775552 logs.go:282] 0 containers: []
	W0127 12:35:57.792855 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:35:57.792865 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:35:57.792883 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:35:57.829348 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:35:57.829378 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:35:57.881085 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:35:57.881124 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:35:57.894103 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:35:57.894129 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:35:57.958846 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:35:57.958875 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:35:57.958892 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:36:00.535009 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:36:00.548408 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:36:00.548484 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:36:00.578807 1775552 cri.go:89] found id: ""
	I0127 12:36:00.578852 1775552 logs.go:282] 0 containers: []
	W0127 12:36:00.578865 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:36:00.578873 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:36:00.578942 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:36:00.611153 1775552 cri.go:89] found id: ""
	I0127 12:36:00.611179 1775552 logs.go:282] 0 containers: []
	W0127 12:36:00.611187 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:36:00.611193 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:36:00.611240 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:36:00.649907 1775552 cri.go:89] found id: ""
	I0127 12:36:00.649941 1775552 logs.go:282] 0 containers: []
	W0127 12:36:00.649952 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:36:00.649960 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:36:00.650026 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:36:00.684041 1775552 cri.go:89] found id: ""
	I0127 12:36:00.684078 1775552 logs.go:282] 0 containers: []
	W0127 12:36:00.684089 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:36:00.684097 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:36:00.684173 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:36:00.724823 1775552 cri.go:89] found id: ""
	I0127 12:36:00.724847 1775552 logs.go:282] 0 containers: []
	W0127 12:36:00.724855 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:36:00.724861 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:36:00.724913 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:36:00.759507 1775552 cri.go:89] found id: ""
	I0127 12:36:00.759540 1775552 logs.go:282] 0 containers: []
	W0127 12:36:00.759551 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:36:00.759559 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:36:00.759627 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:36:00.793961 1775552 cri.go:89] found id: ""
	I0127 12:36:00.793994 1775552 logs.go:282] 0 containers: []
	W0127 12:36:00.794006 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:36:00.794014 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:36:00.794078 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:36:00.827577 1775552 cri.go:89] found id: ""
	I0127 12:36:00.827601 1775552 logs.go:282] 0 containers: []
	W0127 12:36:00.827609 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:36:00.827617 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:36:00.827630 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:36:00.840155 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:36:00.840181 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:36:00.901457 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:36:00.901483 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:36:00.901496 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:36:00.977229 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:36:00.977263 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:36:01.015567 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:36:01.015599 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:36:03.582874 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:36:03.596107 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:36:03.596188 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:36:03.632907 1775552 cri.go:89] found id: ""
	I0127 12:36:03.632938 1775552 logs.go:282] 0 containers: []
	W0127 12:36:03.632949 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:36:03.632957 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:36:03.633018 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:36:03.665411 1775552 cri.go:89] found id: ""
	I0127 12:36:03.665442 1775552 logs.go:282] 0 containers: []
	W0127 12:36:03.665451 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:36:03.665457 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:36:03.665508 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:36:03.703253 1775552 cri.go:89] found id: ""
	I0127 12:36:03.703286 1775552 logs.go:282] 0 containers: []
	W0127 12:36:03.703295 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:36:03.703301 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:36:03.703364 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:36:03.735020 1775552 cri.go:89] found id: ""
	I0127 12:36:03.735048 1775552 logs.go:282] 0 containers: []
	W0127 12:36:03.735056 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:36:03.735062 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:36:03.735118 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:36:03.767918 1775552 cri.go:89] found id: ""
	I0127 12:36:03.767953 1775552 logs.go:282] 0 containers: []
	W0127 12:36:03.767964 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:36:03.767983 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:36:03.768059 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:36:03.801070 1775552 cri.go:89] found id: ""
	I0127 12:36:03.801103 1775552 logs.go:282] 0 containers: []
	W0127 12:36:03.801114 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:36:03.801123 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:36:03.801195 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:36:03.834142 1775552 cri.go:89] found id: ""
	I0127 12:36:03.834179 1775552 logs.go:282] 0 containers: []
	W0127 12:36:03.834191 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:36:03.834198 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:36:03.834253 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:36:03.867955 1775552 cri.go:89] found id: ""
	I0127 12:36:03.867992 1775552 logs.go:282] 0 containers: []
	W0127 12:36:03.868007 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:36:03.868020 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:36:03.868033 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:36:03.917905 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:36:03.917939 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:36:03.930584 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:36:03.930612 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:36:03.993421 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:36:03.993445 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:36:03.993465 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:36:04.068261 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:36:04.068294 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:36:06.613352 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:36:06.626665 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:36:06.626727 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:36:06.662510 1775552 cri.go:89] found id: ""
	I0127 12:36:06.662536 1775552 logs.go:282] 0 containers: []
	W0127 12:36:06.662545 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:36:06.662551 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:36:06.662614 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:36:06.697618 1775552 cri.go:89] found id: ""
	I0127 12:36:06.697649 1775552 logs.go:282] 0 containers: []
	W0127 12:36:06.697657 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:36:06.697663 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:36:06.697723 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:36:06.731340 1775552 cri.go:89] found id: ""
	I0127 12:36:06.731364 1775552 logs.go:282] 0 containers: []
	W0127 12:36:06.731373 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:36:06.731379 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:36:06.731435 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:36:06.767143 1775552 cri.go:89] found id: ""
	I0127 12:36:06.767176 1775552 logs.go:282] 0 containers: []
	W0127 12:36:06.767184 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:36:06.767190 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:36:06.767255 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:36:06.811429 1775552 cri.go:89] found id: ""
	I0127 12:36:06.811460 1775552 logs.go:282] 0 containers: []
	W0127 12:36:06.811471 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:36:06.811478 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:36:06.811546 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:36:06.857105 1775552 cri.go:89] found id: ""
	I0127 12:36:06.857134 1775552 logs.go:282] 0 containers: []
	W0127 12:36:06.857142 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:36:06.857149 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:36:06.857225 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:36:06.896595 1775552 cri.go:89] found id: ""
	I0127 12:36:06.896625 1775552 logs.go:282] 0 containers: []
	W0127 12:36:06.896634 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:36:06.896640 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:36:06.896704 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:36:06.935056 1775552 cri.go:89] found id: ""
	I0127 12:36:06.935089 1775552 logs.go:282] 0 containers: []
	W0127 12:36:06.935101 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:36:06.935113 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:36:06.935128 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:36:06.999875 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:36:06.999903 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:36:06.999920 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:36:07.072938 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:36:07.072978 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:36:07.114542 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:36:07.114584 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:36:07.166612 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:36:07.166644 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:36:09.679879 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:36:09.693365 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:36:09.693442 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:36:09.727158 1775552 cri.go:89] found id: ""
	I0127 12:36:09.727183 1775552 logs.go:282] 0 containers: []
	W0127 12:36:09.727191 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:36:09.727197 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:36:09.727247 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:36:09.761481 1775552 cri.go:89] found id: ""
	I0127 12:36:09.761509 1775552 logs.go:282] 0 containers: []
	W0127 12:36:09.761518 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:36:09.761526 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:36:09.761580 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:36:09.791886 1775552 cri.go:89] found id: ""
	I0127 12:36:09.791913 1775552 logs.go:282] 0 containers: []
	W0127 12:36:09.791923 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:36:09.791930 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:36:09.791996 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:36:09.823052 1775552 cri.go:89] found id: ""
	I0127 12:36:09.823085 1775552 logs.go:282] 0 containers: []
	W0127 12:36:09.823102 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:36:09.823109 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:36:09.823163 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:36:09.853874 1775552 cri.go:89] found id: ""
	I0127 12:36:09.853903 1775552 logs.go:282] 0 containers: []
	W0127 12:36:09.853916 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:36:09.853922 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:36:09.853979 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:36:09.885331 1775552 cri.go:89] found id: ""
	I0127 12:36:09.885360 1775552 logs.go:282] 0 containers: []
	W0127 12:36:09.885372 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:36:09.885380 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:36:09.885444 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:36:09.915908 1775552 cri.go:89] found id: ""
	I0127 12:36:09.915939 1775552 logs.go:282] 0 containers: []
	W0127 12:36:09.915949 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:36:09.915962 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:36:09.916046 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:36:09.948670 1775552 cri.go:89] found id: ""
	I0127 12:36:09.948700 1775552 logs.go:282] 0 containers: []
	W0127 12:36:09.948712 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:36:09.948726 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:36:09.948742 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:36:10.000081 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:36:10.000113 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:36:10.012912 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:36:10.012943 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:36:10.079791 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:36:10.079819 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:36:10.079835 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:36:10.154522 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:36:10.154560 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:36:12.691484 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:36:12.703886 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:36:12.703950 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:36:12.742102 1775552 cri.go:89] found id: ""
	I0127 12:36:12.742135 1775552 logs.go:282] 0 containers: []
	W0127 12:36:12.742147 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:36:12.742154 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:36:12.742228 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:36:12.773255 1775552 cri.go:89] found id: ""
	I0127 12:36:12.773280 1775552 logs.go:282] 0 containers: []
	W0127 12:36:12.773288 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:36:12.773294 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:36:12.773344 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:36:12.804506 1775552 cri.go:89] found id: ""
	I0127 12:36:12.804537 1775552 logs.go:282] 0 containers: []
	W0127 12:36:12.804549 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:36:12.804557 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:36:12.804627 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:36:12.838141 1775552 cri.go:89] found id: ""
	I0127 12:36:12.838171 1775552 logs.go:282] 0 containers: []
	W0127 12:36:12.838183 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:36:12.838191 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:36:12.838257 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:36:12.872616 1775552 cri.go:89] found id: ""
	I0127 12:36:12.872651 1775552 logs.go:282] 0 containers: []
	W0127 12:36:12.872663 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:36:12.872671 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:36:12.872742 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:36:12.903490 1775552 cri.go:89] found id: ""
	I0127 12:36:12.903520 1775552 logs.go:282] 0 containers: []
	W0127 12:36:12.903531 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:36:12.903539 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:36:12.903600 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:36:12.932977 1775552 cri.go:89] found id: ""
	I0127 12:36:12.933002 1775552 logs.go:282] 0 containers: []
	W0127 12:36:12.933010 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:36:12.933016 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:36:12.933083 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:36:12.964262 1775552 cri.go:89] found id: ""
	I0127 12:36:12.964293 1775552 logs.go:282] 0 containers: []
	W0127 12:36:12.964305 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:36:12.964317 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:36:12.964332 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:36:13.015826 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:36:13.015853 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:36:13.027833 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:36:13.027857 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:36:13.089874 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:36:13.089900 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:36:13.089917 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:36:13.171795 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:36:13.171831 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:36:15.707782 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:36:15.721071 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:36:15.721157 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:36:15.752339 1775552 cri.go:89] found id: ""
	I0127 12:36:15.752371 1775552 logs.go:282] 0 containers: []
	W0127 12:36:15.752380 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:36:15.752386 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:36:15.752447 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:36:15.781650 1775552 cri.go:89] found id: ""
	I0127 12:36:15.781678 1775552 logs.go:282] 0 containers: []
	W0127 12:36:15.781689 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:36:15.781696 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:36:15.781770 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:36:15.813399 1775552 cri.go:89] found id: ""
	I0127 12:36:15.813431 1775552 logs.go:282] 0 containers: []
	W0127 12:36:15.813441 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:36:15.813449 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:36:15.813514 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:36:15.845521 1775552 cri.go:89] found id: ""
	I0127 12:36:15.845552 1775552 logs.go:282] 0 containers: []
	W0127 12:36:15.845565 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:36:15.845572 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:36:15.845638 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:36:15.881247 1775552 cri.go:89] found id: ""
	I0127 12:36:15.881278 1775552 logs.go:282] 0 containers: []
	W0127 12:36:15.881288 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:36:15.881294 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:36:15.881363 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:36:15.916228 1775552 cri.go:89] found id: ""
	I0127 12:36:15.916262 1775552 logs.go:282] 0 containers: []
	W0127 12:36:15.916273 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:36:15.916282 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:36:15.916354 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:36:15.946842 1775552 cri.go:89] found id: ""
	I0127 12:36:15.946868 1775552 logs.go:282] 0 containers: []
	W0127 12:36:15.946878 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:36:15.946886 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:36:15.946951 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:36:15.981270 1775552 cri.go:89] found id: ""
	I0127 12:36:15.981298 1775552 logs.go:282] 0 containers: []
	W0127 12:36:15.981306 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:36:15.981316 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:36:15.981330 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:36:16.052166 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:36:16.052191 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:36:16.052203 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:36:16.136733 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:36:16.136768 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:36:16.176496 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:36:16.176529 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:36:16.228733 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:36:16.228764 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:36:18.742574 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:36:18.756160 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:36:18.756227 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:36:18.787145 1775552 cri.go:89] found id: ""
	I0127 12:36:18.787177 1775552 logs.go:282] 0 containers: []
	W0127 12:36:18.787189 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:36:18.787199 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:36:18.787266 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:36:18.821640 1775552 cri.go:89] found id: ""
	I0127 12:36:18.821665 1775552 logs.go:282] 0 containers: []
	W0127 12:36:18.821675 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:36:18.821683 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:36:18.821745 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:36:18.854530 1775552 cri.go:89] found id: ""
	I0127 12:36:18.854562 1775552 logs.go:282] 0 containers: []
	W0127 12:36:18.854573 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:36:18.854580 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:36:18.854649 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:36:18.886725 1775552 cri.go:89] found id: ""
	I0127 12:36:18.886768 1775552 logs.go:282] 0 containers: []
	W0127 12:36:18.886780 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:36:18.886788 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:36:18.886862 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:36:18.915612 1775552 cri.go:89] found id: ""
	I0127 12:36:18.915640 1775552 logs.go:282] 0 containers: []
	W0127 12:36:18.915650 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:36:18.915658 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:36:18.915729 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:36:18.946563 1775552 cri.go:89] found id: ""
	I0127 12:36:18.946592 1775552 logs.go:282] 0 containers: []
	W0127 12:36:18.946603 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:36:18.946611 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:36:18.946675 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:36:18.980288 1775552 cri.go:89] found id: ""
	I0127 12:36:18.980311 1775552 logs.go:282] 0 containers: []
	W0127 12:36:18.980321 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:36:18.980328 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:36:18.980394 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:36:19.010973 1775552 cri.go:89] found id: ""
	I0127 12:36:19.010997 1775552 logs.go:282] 0 containers: []
	W0127 12:36:19.011006 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:36:19.011020 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:36:19.011037 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:36:19.022878 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:36:19.022902 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:36:19.097729 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:36:19.097751 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:36:19.097763 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:36:19.175700 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:36:19.175735 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:36:19.213663 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:36:19.213703 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:36:21.764395 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:36:21.776532 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:36:21.776615 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:36:21.808515 1775552 cri.go:89] found id: ""
	I0127 12:36:21.808549 1775552 logs.go:282] 0 containers: []
	W0127 12:36:21.808558 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:36:21.808564 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:36:21.808616 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:36:21.841771 1775552 cri.go:89] found id: ""
	I0127 12:36:21.841807 1775552 logs.go:282] 0 containers: []
	W0127 12:36:21.841820 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:36:21.841844 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:36:21.841921 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:36:21.874129 1775552 cri.go:89] found id: ""
	I0127 12:36:21.874154 1775552 logs.go:282] 0 containers: []
	W0127 12:36:21.874162 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:36:21.874168 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:36:21.874228 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:36:21.904789 1775552 cri.go:89] found id: ""
	I0127 12:36:21.904818 1775552 logs.go:282] 0 containers: []
	W0127 12:36:21.904827 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:36:21.904833 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:36:21.904885 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:36:21.935512 1775552 cri.go:89] found id: ""
	I0127 12:36:21.935538 1775552 logs.go:282] 0 containers: []
	W0127 12:36:21.935546 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:36:21.935552 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:36:21.935603 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:36:21.969092 1775552 cri.go:89] found id: ""
	I0127 12:36:21.969124 1775552 logs.go:282] 0 containers: []
	W0127 12:36:21.969134 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:36:21.969140 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:36:21.969202 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:36:22.003306 1775552 cri.go:89] found id: ""
	I0127 12:36:22.003339 1775552 logs.go:282] 0 containers: []
	W0127 12:36:22.003350 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:36:22.003359 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:36:22.003422 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:36:22.037391 1775552 cri.go:89] found id: ""
	I0127 12:36:22.037419 1775552 logs.go:282] 0 containers: []
	W0127 12:36:22.037431 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:36:22.037445 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:36:22.037461 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:36:22.078080 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:36:22.078115 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:36:22.126679 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:36:22.126711 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:36:22.139891 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:36:22.139921 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:36:22.206630 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:36:22.206659 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:36:22.206677 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:36:24.789482 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:36:24.802554 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:36:24.802624 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:36:24.841354 1775552 cri.go:89] found id: ""
	I0127 12:36:24.841389 1775552 logs.go:282] 0 containers: []
	W0127 12:36:24.841400 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:36:24.841408 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:36:24.841473 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:36:24.876959 1775552 cri.go:89] found id: ""
	I0127 12:36:24.876987 1775552 logs.go:282] 0 containers: []
	W0127 12:36:24.876999 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:36:24.877006 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:36:24.877067 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:36:24.913050 1775552 cri.go:89] found id: ""
	I0127 12:36:24.913085 1775552 logs.go:282] 0 containers: []
	W0127 12:36:24.913096 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:36:24.913105 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:36:24.913169 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:36:24.947773 1775552 cri.go:89] found id: ""
	I0127 12:36:24.947812 1775552 logs.go:282] 0 containers: []
	W0127 12:36:24.947823 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:36:24.947830 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:36:24.947900 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:36:24.977000 1775552 cri.go:89] found id: ""
	I0127 12:36:24.977026 1775552 logs.go:282] 0 containers: []
	W0127 12:36:24.977034 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:36:24.977040 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:36:24.977087 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:36:25.012334 1775552 cri.go:89] found id: ""
	I0127 12:36:25.012360 1775552 logs.go:282] 0 containers: []
	W0127 12:36:25.012373 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:36:25.012380 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:36:25.012432 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:36:25.046913 1775552 cri.go:89] found id: ""
	I0127 12:36:25.046941 1775552 logs.go:282] 0 containers: []
	W0127 12:36:25.046949 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:36:25.046955 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:36:25.047010 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:36:25.076751 1775552 cri.go:89] found id: ""
	I0127 12:36:25.076781 1775552 logs.go:282] 0 containers: []
	W0127 12:36:25.076791 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:36:25.076802 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:36:25.076815 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:36:25.111356 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:36:25.111382 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:36:25.160841 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:36:25.160875 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:36:25.173469 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:36:25.173492 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:36:25.238040 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:36:25.238067 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:36:25.238080 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:36:27.813298 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:36:27.828317 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:36:27.828398 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:36:27.866778 1775552 cri.go:89] found id: ""
	I0127 12:36:27.866814 1775552 logs.go:282] 0 containers: []
	W0127 12:36:27.866826 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:36:27.866833 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:36:27.866902 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:36:27.905265 1775552 cri.go:89] found id: ""
	I0127 12:36:27.905301 1775552 logs.go:282] 0 containers: []
	W0127 12:36:27.905313 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:36:27.905323 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:36:27.905396 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:36:27.943103 1775552 cri.go:89] found id: ""
	I0127 12:36:27.943134 1775552 logs.go:282] 0 containers: []
	W0127 12:36:27.943144 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:36:27.943152 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:36:27.943227 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:36:27.982438 1775552 cri.go:89] found id: ""
	I0127 12:36:27.982466 1775552 logs.go:282] 0 containers: []
	W0127 12:36:27.982476 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:36:27.982484 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:36:27.982553 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:36:28.020224 1775552 cri.go:89] found id: ""
	I0127 12:36:28.020252 1775552 logs.go:282] 0 containers: []
	W0127 12:36:28.020262 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:36:28.020270 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:36:28.020341 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:36:28.056213 1775552 cri.go:89] found id: ""
	I0127 12:36:28.056247 1775552 logs.go:282] 0 containers: []
	W0127 12:36:28.056255 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:36:28.056261 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:36:28.056318 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:36:28.089603 1775552 cri.go:89] found id: ""
	I0127 12:36:28.089635 1775552 logs.go:282] 0 containers: []
	W0127 12:36:28.089647 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:36:28.089655 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:36:28.089751 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:36:28.122423 1775552 cri.go:89] found id: ""
	I0127 12:36:28.122456 1775552 logs.go:282] 0 containers: []
	W0127 12:36:28.122468 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:36:28.122481 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:36:28.122497 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:36:28.176245 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:36:28.176294 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:36:28.191793 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:36:28.191822 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:36:28.262553 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:36:28.262580 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:36:28.262593 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:36:28.337551 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:36:28.337593 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:36:30.880432 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:36:30.893330 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:36:30.893408 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:36:30.925932 1775552 cri.go:89] found id: ""
	I0127 12:36:30.925970 1775552 logs.go:282] 0 containers: []
	W0127 12:36:30.925981 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:36:30.925990 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:36:30.926058 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:36:30.957818 1775552 cri.go:89] found id: ""
	I0127 12:36:30.957853 1775552 logs.go:282] 0 containers: []
	W0127 12:36:30.957867 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:36:30.957875 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:36:30.957942 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:36:30.991678 1775552 cri.go:89] found id: ""
	I0127 12:36:30.991710 1775552 logs.go:282] 0 containers: []
	W0127 12:36:30.991720 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:36:30.991729 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:36:30.991793 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:36:31.022337 1775552 cri.go:89] found id: ""
	I0127 12:36:31.022364 1775552 logs.go:282] 0 containers: []
	W0127 12:36:31.022373 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:36:31.022386 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:36:31.022448 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:36:31.052150 1775552 cri.go:89] found id: ""
	I0127 12:36:31.052189 1775552 logs.go:282] 0 containers: []
	W0127 12:36:31.052202 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:36:31.052210 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:36:31.052267 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:36:31.082796 1775552 cri.go:89] found id: ""
	I0127 12:36:31.082830 1775552 logs.go:282] 0 containers: []
	W0127 12:36:31.082838 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:36:31.082843 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:36:31.082896 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:36:31.114801 1775552 cri.go:89] found id: ""
	I0127 12:36:31.114835 1775552 logs.go:282] 0 containers: []
	W0127 12:36:31.114844 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:36:31.114850 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:36:31.114919 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:36:31.145010 1775552 cri.go:89] found id: ""
	I0127 12:36:31.145034 1775552 logs.go:282] 0 containers: []
	W0127 12:36:31.145045 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:36:31.145057 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:36:31.145070 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:36:31.157822 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:36:31.157858 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:36:31.228485 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:36:31.228510 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:36:31.228526 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:36:31.302360 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:36:31.302401 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:36:31.337290 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:36:31.337327 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:36:33.888291 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:36:33.900918 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:36:33.900994 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:36:33.934100 1775552 cri.go:89] found id: ""
	I0127 12:36:33.934133 1775552 logs.go:282] 0 containers: []
	W0127 12:36:33.934144 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:36:33.934151 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:36:33.934218 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:36:33.970123 1775552 cri.go:89] found id: ""
	I0127 12:36:33.970154 1775552 logs.go:282] 0 containers: []
	W0127 12:36:33.970166 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:36:33.970174 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:36:33.970240 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:36:34.006415 1775552 cri.go:89] found id: ""
	I0127 12:36:34.006452 1775552 logs.go:282] 0 containers: []
	W0127 12:36:34.006466 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:36:34.006475 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:36:34.006542 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:36:34.038329 1775552 cri.go:89] found id: ""
	I0127 12:36:34.038359 1775552 logs.go:282] 0 containers: []
	W0127 12:36:34.038371 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:36:34.038380 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:36:34.038446 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:36:34.073373 1775552 cri.go:89] found id: ""
	I0127 12:36:34.073404 1775552 logs.go:282] 0 containers: []
	W0127 12:36:34.073412 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:36:34.073418 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:36:34.073474 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:36:34.107849 1775552 cri.go:89] found id: ""
	I0127 12:36:34.107881 1775552 logs.go:282] 0 containers: []
	W0127 12:36:34.107892 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:36:34.107902 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:36:34.107971 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:36:34.148451 1775552 cri.go:89] found id: ""
	I0127 12:36:34.148485 1775552 logs.go:282] 0 containers: []
	W0127 12:36:34.148497 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:36:34.148506 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:36:34.148576 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:36:34.180265 1775552 cri.go:89] found id: ""
	I0127 12:36:34.180305 1775552 logs.go:282] 0 containers: []
	W0127 12:36:34.180315 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:36:34.180326 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:36:34.180341 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:36:34.251300 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:36:34.251326 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:36:34.251338 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:36:34.322318 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:36:34.322356 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:36:34.359409 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:36:34.359442 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:36:34.406650 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:36:34.406689 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:36:36.920452 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:36:36.932544 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:36:36.932604 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:36:36.964941 1775552 cri.go:89] found id: ""
	I0127 12:36:36.964964 1775552 logs.go:282] 0 containers: []
	W0127 12:36:36.964973 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:36:36.964978 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:36:36.965038 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:36:37.000338 1775552 cri.go:89] found id: ""
	I0127 12:36:37.000367 1775552 logs.go:282] 0 containers: []
	W0127 12:36:37.000378 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:36:37.000385 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:36:37.000449 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:36:37.031270 1775552 cri.go:89] found id: ""
	I0127 12:36:37.031299 1775552 logs.go:282] 0 containers: []
	W0127 12:36:37.031307 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:36:37.031313 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:36:37.031374 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:36:37.062628 1775552 cri.go:89] found id: ""
	I0127 12:36:37.062653 1775552 logs.go:282] 0 containers: []
	W0127 12:36:37.062662 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:36:37.062668 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:36:37.062726 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:36:37.093607 1775552 cri.go:89] found id: ""
	I0127 12:36:37.093642 1775552 logs.go:282] 0 containers: []
	W0127 12:36:37.093652 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:36:37.093658 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:36:37.093724 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:36:37.125741 1775552 cri.go:89] found id: ""
	I0127 12:36:37.125777 1775552 logs.go:282] 0 containers: []
	W0127 12:36:37.125789 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:36:37.125797 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:36:37.125853 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:36:37.157214 1775552 cri.go:89] found id: ""
	I0127 12:36:37.157250 1775552 logs.go:282] 0 containers: []
	W0127 12:36:37.157261 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:36:37.157268 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:36:37.157325 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:36:37.190300 1775552 cri.go:89] found id: ""
	I0127 12:36:37.190334 1775552 logs.go:282] 0 containers: []
	W0127 12:36:37.190345 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:36:37.190357 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:36:37.190378 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:36:37.226569 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:36:37.226595 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:36:37.279929 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:36:37.279968 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:36:37.293359 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:36:37.293397 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:36:37.360694 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:36:37.360724 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:36:37.360742 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:36:39.938905 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:36:39.952118 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:36:39.952184 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:36:39.989025 1775552 cri.go:89] found id: ""
	I0127 12:36:39.989053 1775552 logs.go:282] 0 containers: []
	W0127 12:36:39.989061 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:36:39.989067 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:36:39.989129 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:36:40.018922 1775552 cri.go:89] found id: ""
	I0127 12:36:40.018950 1775552 logs.go:282] 0 containers: []
	W0127 12:36:40.018957 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:36:40.018963 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:36:40.019012 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:36:40.049924 1775552 cri.go:89] found id: ""
	I0127 12:36:40.049958 1775552 logs.go:282] 0 containers: []
	W0127 12:36:40.049972 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:36:40.049980 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:36:40.050042 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:36:40.082346 1775552 cri.go:89] found id: ""
	I0127 12:36:40.082373 1775552 logs.go:282] 0 containers: []
	W0127 12:36:40.082381 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:36:40.082386 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:36:40.082440 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:36:40.119534 1775552 cri.go:89] found id: ""
	I0127 12:36:40.119567 1775552 logs.go:282] 0 containers: []
	W0127 12:36:40.119578 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:36:40.119585 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:36:40.119652 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:36:40.155925 1775552 cri.go:89] found id: ""
	I0127 12:36:40.155949 1775552 logs.go:282] 0 containers: []
	W0127 12:36:40.155957 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:36:40.155963 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:36:40.156025 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:36:40.186478 1775552 cri.go:89] found id: ""
	I0127 12:36:40.186504 1775552 logs.go:282] 0 containers: []
	W0127 12:36:40.186514 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:36:40.186522 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:36:40.186586 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:36:40.219464 1775552 cri.go:89] found id: ""
	I0127 12:36:40.219499 1775552 logs.go:282] 0 containers: []
	W0127 12:36:40.219511 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:36:40.219526 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:36:40.219542 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:36:40.272788 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:36:40.272819 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:36:40.287122 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:36:40.287148 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:36:40.353988 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:36:40.354010 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:36:40.354023 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:36:40.428383 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:36:40.428417 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:36:42.967933 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:36:42.980555 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:36:42.980614 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:36:43.016293 1775552 cri.go:89] found id: ""
	I0127 12:36:43.016320 1775552 logs.go:282] 0 containers: []
	W0127 12:36:43.016331 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:36:43.016338 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:36:43.016404 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:36:43.046638 1775552 cri.go:89] found id: ""
	I0127 12:36:43.046671 1775552 logs.go:282] 0 containers: []
	W0127 12:36:43.046682 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:36:43.046690 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:36:43.046772 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:36:43.078605 1775552 cri.go:89] found id: ""
	I0127 12:36:43.078638 1775552 logs.go:282] 0 containers: []
	W0127 12:36:43.078656 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:36:43.078665 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:36:43.078735 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:36:43.107580 1775552 cri.go:89] found id: ""
	I0127 12:36:43.107605 1775552 logs.go:282] 0 containers: []
	W0127 12:36:43.107612 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:36:43.107618 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:36:43.107678 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:36:43.137979 1775552 cri.go:89] found id: ""
	I0127 12:36:43.138010 1775552 logs.go:282] 0 containers: []
	W0127 12:36:43.138018 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:36:43.138024 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:36:43.138099 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:36:43.169562 1775552 cri.go:89] found id: ""
	I0127 12:36:43.169591 1775552 logs.go:282] 0 containers: []
	W0127 12:36:43.169599 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:36:43.169606 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:36:43.169658 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:36:43.200995 1775552 cri.go:89] found id: ""
	I0127 12:36:43.201031 1775552 logs.go:282] 0 containers: []
	W0127 12:36:43.201043 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:36:43.201050 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:36:43.201119 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:36:43.239432 1775552 cri.go:89] found id: ""
	I0127 12:36:43.239468 1775552 logs.go:282] 0 containers: []
	W0127 12:36:43.239481 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:36:43.239495 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:36:43.239515 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:36:43.252054 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:36:43.252085 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:36:43.312969 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:36:43.312995 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:36:43.313011 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:36:43.387305 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:36:43.387340 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:36:43.424488 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:36:43.424525 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:36:45.979867 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:36:45.992654 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:36:45.992726 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:36:46.025303 1775552 cri.go:89] found id: ""
	I0127 12:36:46.025332 1775552 logs.go:282] 0 containers: []
	W0127 12:36:46.025344 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:36:46.025351 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:36:46.025414 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:36:46.059448 1775552 cri.go:89] found id: ""
	I0127 12:36:46.059476 1775552 logs.go:282] 0 containers: []
	W0127 12:36:46.059487 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:36:46.059498 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:36:46.059563 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:36:46.092979 1775552 cri.go:89] found id: ""
	I0127 12:36:46.093019 1775552 logs.go:282] 0 containers: []
	W0127 12:36:46.093030 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:36:46.093036 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:36:46.093105 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:36:46.127443 1775552 cri.go:89] found id: ""
	I0127 12:36:46.127468 1775552 logs.go:282] 0 containers: []
	W0127 12:36:46.127476 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:36:46.127482 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:36:46.127542 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:36:46.162468 1775552 cri.go:89] found id: ""
	I0127 12:36:46.162490 1775552 logs.go:282] 0 containers: []
	W0127 12:36:46.162498 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:36:46.162504 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:36:46.162562 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:36:46.193037 1775552 cri.go:89] found id: ""
	I0127 12:36:46.193061 1775552 logs.go:282] 0 containers: []
	W0127 12:36:46.193070 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:36:46.193078 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:36:46.193137 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:36:46.224256 1775552 cri.go:89] found id: ""
	I0127 12:36:46.224281 1775552 logs.go:282] 0 containers: []
	W0127 12:36:46.224291 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:36:46.224299 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:36:46.224365 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:36:46.253278 1775552 cri.go:89] found id: ""
	I0127 12:36:46.253308 1775552 logs.go:282] 0 containers: []
	W0127 12:36:46.253320 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:36:46.253332 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:36:46.253346 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:36:46.266442 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:36:46.266474 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:36:46.334357 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:36:46.334390 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:36:46.334411 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:36:46.422338 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:36:46.422381 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:36:46.461683 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:36:46.461707 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:36:49.010537 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:36:49.022782 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:36:49.022864 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:36:49.054385 1775552 cri.go:89] found id: ""
	I0127 12:36:49.054419 1775552 logs.go:282] 0 containers: []
	W0127 12:36:49.054431 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:36:49.054438 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:36:49.054496 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:36:49.086391 1775552 cri.go:89] found id: ""
	I0127 12:36:49.086426 1775552 logs.go:282] 0 containers: []
	W0127 12:36:49.086441 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:36:49.086449 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:36:49.086528 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:36:49.117709 1775552 cri.go:89] found id: ""
	I0127 12:36:49.117742 1775552 logs.go:282] 0 containers: []
	W0127 12:36:49.117755 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:36:49.117762 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:36:49.117840 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:36:49.150827 1775552 cri.go:89] found id: ""
	I0127 12:36:49.150872 1775552 logs.go:282] 0 containers: []
	W0127 12:36:49.150882 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:36:49.150888 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:36:49.150949 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:36:49.182730 1775552 cri.go:89] found id: ""
	I0127 12:36:49.182771 1775552 logs.go:282] 0 containers: []
	W0127 12:36:49.182782 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:36:49.182790 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:36:49.182878 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:36:49.220126 1775552 cri.go:89] found id: ""
	I0127 12:36:49.220160 1775552 logs.go:282] 0 containers: []
	W0127 12:36:49.220179 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:36:49.220188 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:36:49.220250 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:36:49.250208 1775552 cri.go:89] found id: ""
	I0127 12:36:49.250240 1775552 logs.go:282] 0 containers: []
	W0127 12:36:49.250251 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:36:49.250259 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:36:49.250328 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:36:49.280244 1775552 cri.go:89] found id: ""
	I0127 12:36:49.280274 1775552 logs.go:282] 0 containers: []
	W0127 12:36:49.280286 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:36:49.280298 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:36:49.280311 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:36:49.329479 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:36:49.329510 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:36:49.344581 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:36:49.344615 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:36:49.411426 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:36:49.411450 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:36:49.411465 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:36:49.491434 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:36:49.491477 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:36:52.028724 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:36:52.041662 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:36:52.041728 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:36:52.077259 1775552 cri.go:89] found id: ""
	I0127 12:36:52.077285 1775552 logs.go:282] 0 containers: []
	W0127 12:36:52.077294 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:36:52.077302 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:36:52.077357 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:36:52.111529 1775552 cri.go:89] found id: ""
	I0127 12:36:52.111550 1775552 logs.go:282] 0 containers: []
	W0127 12:36:52.111556 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:36:52.111564 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:36:52.111620 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:36:52.148770 1775552 cri.go:89] found id: ""
	I0127 12:36:52.148797 1775552 logs.go:282] 0 containers: []
	W0127 12:36:52.148805 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:36:52.148811 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:36:52.148874 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:36:52.197014 1775552 cri.go:89] found id: ""
	I0127 12:36:52.197047 1775552 logs.go:282] 0 containers: []
	W0127 12:36:52.197059 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:36:52.197067 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:36:52.197135 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:36:52.234165 1775552 cri.go:89] found id: ""
	I0127 12:36:52.234195 1775552 logs.go:282] 0 containers: []
	W0127 12:36:52.234207 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:36:52.234216 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:36:52.234270 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:36:52.270959 1775552 cri.go:89] found id: ""
	I0127 12:36:52.270987 1775552 logs.go:282] 0 containers: []
	W0127 12:36:52.270998 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:36:52.271005 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:36:52.271057 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:36:52.310504 1775552 cri.go:89] found id: ""
	I0127 12:36:52.310529 1775552 logs.go:282] 0 containers: []
	W0127 12:36:52.310536 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:36:52.310541 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:36:52.310579 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:36:52.347542 1775552 cri.go:89] found id: ""
	I0127 12:36:52.347575 1775552 logs.go:282] 0 containers: []
	W0127 12:36:52.347587 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:36:52.347601 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:36:52.347621 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:36:52.420748 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:36:52.420777 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:36:52.420795 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:36:52.499152 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:36:52.499207 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:36:52.545304 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:36:52.545330 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:36:52.600189 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:36:52.600226 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:36:55.115095 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:36:55.127698 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:36:55.127777 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:36:55.163827 1775552 cri.go:89] found id: ""
	I0127 12:36:55.163857 1775552 logs.go:282] 0 containers: []
	W0127 12:36:55.163868 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:36:55.163876 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:36:55.163942 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:36:55.196965 1775552 cri.go:89] found id: ""
	I0127 12:36:55.196997 1775552 logs.go:282] 0 containers: []
	W0127 12:36:55.197008 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:36:55.197016 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:36:55.197083 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:36:55.230642 1775552 cri.go:89] found id: ""
	I0127 12:36:55.230665 1775552 logs.go:282] 0 containers: []
	W0127 12:36:55.230675 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:36:55.230683 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:36:55.230772 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:36:55.267647 1775552 cri.go:89] found id: ""
	I0127 12:36:55.267682 1775552 logs.go:282] 0 containers: []
	W0127 12:36:55.267694 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:36:55.267702 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:36:55.267772 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:36:55.306098 1775552 cri.go:89] found id: ""
	I0127 12:36:55.306132 1775552 logs.go:282] 0 containers: []
	W0127 12:36:55.306144 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:36:55.306153 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:36:55.306211 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:36:55.339805 1775552 cri.go:89] found id: ""
	I0127 12:36:55.339837 1775552 logs.go:282] 0 containers: []
	W0127 12:36:55.339849 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:36:55.339858 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:36:55.339916 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:36:55.371853 1775552 cri.go:89] found id: ""
	I0127 12:36:55.371879 1775552 logs.go:282] 0 containers: []
	W0127 12:36:55.371887 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:36:55.371893 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:36:55.371956 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:36:55.402771 1775552 cri.go:89] found id: ""
	I0127 12:36:55.402796 1775552 logs.go:282] 0 containers: []
	W0127 12:36:55.402805 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:36:55.402815 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:36:55.402832 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:36:55.415759 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:36:55.415789 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:36:55.486319 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:36:55.486351 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:36:55.486368 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:36:55.560336 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:36:55.560374 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:36:55.596185 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:36:55.596233 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:36:58.149638 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:36:58.162662 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:36:58.162719 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:36:58.196706 1775552 cri.go:89] found id: ""
	I0127 12:36:58.196733 1775552 logs.go:282] 0 containers: []
	W0127 12:36:58.196741 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:36:58.196746 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:36:58.196803 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:36:58.232963 1775552 cri.go:89] found id: ""
	I0127 12:36:58.233002 1775552 logs.go:282] 0 containers: []
	W0127 12:36:58.233013 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:36:58.233022 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:36:58.233089 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:36:58.265966 1775552 cri.go:89] found id: ""
	I0127 12:36:58.266000 1775552 logs.go:282] 0 containers: []
	W0127 12:36:58.266011 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:36:58.266023 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:36:58.266084 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:36:58.305592 1775552 cri.go:89] found id: ""
	I0127 12:36:58.305619 1775552 logs.go:282] 0 containers: []
	W0127 12:36:58.305627 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:36:58.305633 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:36:58.305694 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:36:58.337114 1775552 cri.go:89] found id: ""
	I0127 12:36:58.337146 1775552 logs.go:282] 0 containers: []
	W0127 12:36:58.337157 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:36:58.337166 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:36:58.337232 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:36:58.374537 1775552 cri.go:89] found id: ""
	I0127 12:36:58.374560 1775552 logs.go:282] 0 containers: []
	W0127 12:36:58.374568 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:36:58.374574 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:36:58.374622 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:36:58.408167 1775552 cri.go:89] found id: ""
	I0127 12:36:58.408199 1775552 logs.go:282] 0 containers: []
	W0127 12:36:58.408210 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:36:58.408219 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:36:58.408286 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:36:58.442196 1775552 cri.go:89] found id: ""
	I0127 12:36:58.442222 1775552 logs.go:282] 0 containers: []
	W0127 12:36:58.442231 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:36:58.442242 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:36:58.442261 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:36:58.495070 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:36:58.495114 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:36:58.511036 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:36:58.511074 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:36:58.632085 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:36:58.632112 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:36:58.632128 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:36:58.719683 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:36:58.719728 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:37:01.259104 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:37:01.271886 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:37:01.271966 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:37:01.306919 1775552 cri.go:89] found id: ""
	I0127 12:37:01.306953 1775552 logs.go:282] 0 containers: []
	W0127 12:37:01.306965 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:37:01.306974 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:37:01.307036 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:37:01.339574 1775552 cri.go:89] found id: ""
	I0127 12:37:01.339612 1775552 logs.go:282] 0 containers: []
	W0127 12:37:01.339624 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:37:01.339641 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:37:01.339708 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:37:01.382068 1775552 cri.go:89] found id: ""
	I0127 12:37:01.382100 1775552 logs.go:282] 0 containers: []
	W0127 12:37:01.382109 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:37:01.382115 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:37:01.382182 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:37:01.425289 1775552 cri.go:89] found id: ""
	I0127 12:37:01.425316 1775552 logs.go:282] 0 containers: []
	W0127 12:37:01.425324 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:37:01.425331 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:37:01.425392 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:37:01.459600 1775552 cri.go:89] found id: ""
	I0127 12:37:01.459641 1775552 logs.go:282] 0 containers: []
	W0127 12:37:01.459656 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:37:01.459666 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:37:01.459742 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:37:01.499359 1775552 cri.go:89] found id: ""
	I0127 12:37:01.499398 1775552 logs.go:282] 0 containers: []
	W0127 12:37:01.499408 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:37:01.499414 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:37:01.499479 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:37:01.530497 1775552 cri.go:89] found id: ""
	I0127 12:37:01.530526 1775552 logs.go:282] 0 containers: []
	W0127 12:37:01.530533 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:37:01.530539 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:37:01.530590 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:37:01.559878 1775552 cri.go:89] found id: ""
	I0127 12:37:01.559909 1775552 logs.go:282] 0 containers: []
	W0127 12:37:01.559919 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:37:01.559933 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:37:01.559958 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:37:01.610420 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:37:01.610454 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:37:01.625224 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:37:01.625257 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:37:01.691710 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:37:01.691733 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:37:01.691748 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:37:01.764834 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:37:01.764879 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:37:04.315108 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:37:04.337323 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:37:04.337407 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:37:04.385468 1775552 cri.go:89] found id: ""
	I0127 12:37:04.385502 1775552 logs.go:282] 0 containers: []
	W0127 12:37:04.385515 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:37:04.385523 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:37:04.385593 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:37:04.430269 1775552 cri.go:89] found id: ""
	I0127 12:37:04.430307 1775552 logs.go:282] 0 containers: []
	W0127 12:37:04.430319 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:37:04.430330 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:37:04.430410 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:37:04.461357 1775552 cri.go:89] found id: ""
	I0127 12:37:04.461390 1775552 logs.go:282] 0 containers: []
	W0127 12:37:04.461402 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:37:04.461411 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:37:04.461485 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:37:04.497297 1775552 cri.go:89] found id: ""
	I0127 12:37:04.497333 1775552 logs.go:282] 0 containers: []
	W0127 12:37:04.497345 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:37:04.497353 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:37:04.497423 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:37:04.531666 1775552 cri.go:89] found id: ""
	I0127 12:37:04.531692 1775552 logs.go:282] 0 containers: []
	W0127 12:37:04.531703 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:37:04.531711 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:37:04.531786 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:37:04.576368 1775552 cri.go:89] found id: ""
	I0127 12:37:04.576403 1775552 logs.go:282] 0 containers: []
	W0127 12:37:04.576413 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:37:04.576420 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:37:04.576488 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:37:04.621269 1775552 cri.go:89] found id: ""
	I0127 12:37:04.621297 1775552 logs.go:282] 0 containers: []
	W0127 12:37:04.621305 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:37:04.621311 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:37:04.621364 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:37:04.656178 1775552 cri.go:89] found id: ""
	I0127 12:37:04.656209 1775552 logs.go:282] 0 containers: []
	W0127 12:37:04.656219 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:37:04.656231 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:37:04.656245 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:37:04.739779 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:37:04.739809 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:37:04.739823 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:37:04.823726 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:37:04.823769 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:37:04.868358 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:37:04.868388 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:37:04.917864 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:37:04.917903 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:37:07.432357 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:37:07.445759 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:37:07.445854 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:37:07.478385 1775552 cri.go:89] found id: ""
	I0127 12:37:07.478413 1775552 logs.go:282] 0 containers: []
	W0127 12:37:07.478421 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:37:07.478427 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:37:07.478485 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:37:07.515267 1775552 cri.go:89] found id: ""
	I0127 12:37:07.515289 1775552 logs.go:282] 0 containers: []
	W0127 12:37:07.515296 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:37:07.515302 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:37:07.515374 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:37:07.547263 1775552 cri.go:89] found id: ""
	I0127 12:37:07.547370 1775552 logs.go:282] 0 containers: []
	W0127 12:37:07.547390 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:37:07.547400 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:37:07.547470 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:37:07.578610 1775552 cri.go:89] found id: ""
	I0127 12:37:07.578643 1775552 logs.go:282] 0 containers: []
	W0127 12:37:07.578655 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:37:07.578663 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:37:07.578719 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:37:07.612306 1775552 cri.go:89] found id: ""
	I0127 12:37:07.612340 1775552 logs.go:282] 0 containers: []
	W0127 12:37:07.612352 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:37:07.612360 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:37:07.612435 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:37:07.645489 1775552 cri.go:89] found id: ""
	I0127 12:37:07.645532 1775552 logs.go:282] 0 containers: []
	W0127 12:37:07.645544 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:37:07.645552 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:37:07.645620 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:37:07.680219 1775552 cri.go:89] found id: ""
	I0127 12:37:07.680252 1775552 logs.go:282] 0 containers: []
	W0127 12:37:07.680263 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:37:07.680271 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:37:07.680341 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:37:07.719763 1775552 cri.go:89] found id: ""
	I0127 12:37:07.719791 1775552 logs.go:282] 0 containers: []
	W0127 12:37:07.719802 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:37:07.719811 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:37:07.719825 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:37:07.769066 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:37:07.769113 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:37:07.784173 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:37:07.784203 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:37:07.847590 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:37:07.847615 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:37:07.847630 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:37:07.920158 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:37:07.920192 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:37:10.455563 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:37:10.467564 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:37:10.467620 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:37:10.498941 1775552 cri.go:89] found id: ""
	I0127 12:37:10.498968 1775552 logs.go:282] 0 containers: []
	W0127 12:37:10.498975 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:37:10.498984 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:37:10.499034 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:37:10.530542 1775552 cri.go:89] found id: ""
	I0127 12:37:10.530569 1775552 logs.go:282] 0 containers: []
	W0127 12:37:10.530578 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:37:10.530584 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:37:10.530636 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:37:10.563424 1775552 cri.go:89] found id: ""
	I0127 12:37:10.563456 1775552 logs.go:282] 0 containers: []
	W0127 12:37:10.563466 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:37:10.563473 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:37:10.563531 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:37:10.594214 1775552 cri.go:89] found id: ""
	I0127 12:37:10.594251 1775552 logs.go:282] 0 containers: []
	W0127 12:37:10.594262 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:37:10.594271 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:37:10.594334 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:37:10.626542 1775552 cri.go:89] found id: ""
	I0127 12:37:10.626571 1775552 logs.go:282] 0 containers: []
	W0127 12:37:10.626579 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:37:10.626585 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:37:10.626646 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:37:10.661167 1775552 cri.go:89] found id: ""
	I0127 12:37:10.661190 1775552 logs.go:282] 0 containers: []
	W0127 12:37:10.661199 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:37:10.661204 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:37:10.661268 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:37:10.701545 1775552 cri.go:89] found id: ""
	I0127 12:37:10.701581 1775552 logs.go:282] 0 containers: []
	W0127 12:37:10.701593 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:37:10.701602 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:37:10.701678 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:37:10.741672 1775552 cri.go:89] found id: ""
	I0127 12:37:10.741704 1775552 logs.go:282] 0 containers: []
	W0127 12:37:10.741717 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:37:10.741731 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:37:10.741747 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:37:10.791649 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:37:10.791680 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:37:10.803848 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:37:10.803874 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:37:10.867934 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:37:10.867958 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:37:10.867972 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:37:10.939942 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:37:10.939981 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:37:13.480979 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:37:13.493645 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:37:13.493709 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:37:13.529486 1775552 cri.go:89] found id: ""
	I0127 12:37:13.529516 1775552 logs.go:282] 0 containers: []
	W0127 12:37:13.529526 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:37:13.529532 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:37:13.529584 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:37:13.560336 1775552 cri.go:89] found id: ""
	I0127 12:37:13.560360 1775552 logs.go:282] 0 containers: []
	W0127 12:37:13.560369 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:37:13.560378 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:37:13.560428 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:37:13.591705 1775552 cri.go:89] found id: ""
	I0127 12:37:13.591736 1775552 logs.go:282] 0 containers: []
	W0127 12:37:13.591744 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:37:13.591763 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:37:13.591857 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:37:13.624866 1775552 cri.go:89] found id: ""
	I0127 12:37:13.624890 1775552 logs.go:282] 0 containers: []
	W0127 12:37:13.624907 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:37:13.624915 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:37:13.624985 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:37:13.655827 1775552 cri.go:89] found id: ""
	I0127 12:37:13.655851 1775552 logs.go:282] 0 containers: []
	W0127 12:37:13.655862 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:37:13.655869 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:37:13.655937 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:37:13.687291 1775552 cri.go:89] found id: ""
	I0127 12:37:13.687324 1775552 logs.go:282] 0 containers: []
	W0127 12:37:13.687335 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:37:13.687343 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:37:13.687407 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:37:13.720682 1775552 cri.go:89] found id: ""
	I0127 12:37:13.720716 1775552 logs.go:282] 0 containers: []
	W0127 12:37:13.720727 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:37:13.720735 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:37:13.720821 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:37:13.751168 1775552 cri.go:89] found id: ""
	I0127 12:37:13.751200 1775552 logs.go:282] 0 containers: []
	W0127 12:37:13.751211 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:37:13.751230 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:37:13.751246 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:37:13.826897 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:37:13.826942 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:37:13.860602 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:37:13.860638 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:37:13.909217 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:37:13.909250 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:37:13.923561 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:37:13.923586 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:37:13.996346 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:37:16.496937 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:37:16.509165 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:37:16.509238 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:37:16.539821 1775552 cri.go:89] found id: ""
	I0127 12:37:16.539853 1775552 logs.go:282] 0 containers: []
	W0127 12:37:16.539861 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:37:16.539867 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:37:16.539928 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:37:16.569652 1775552 cri.go:89] found id: ""
	I0127 12:37:16.569678 1775552 logs.go:282] 0 containers: []
	W0127 12:37:16.569698 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:37:16.569704 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:37:16.569764 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:37:16.602626 1775552 cri.go:89] found id: ""
	I0127 12:37:16.602655 1775552 logs.go:282] 0 containers: []
	W0127 12:37:16.602667 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:37:16.602675 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:37:16.602729 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:37:16.632636 1775552 cri.go:89] found id: ""
	I0127 12:37:16.632664 1775552 logs.go:282] 0 containers: []
	W0127 12:37:16.632672 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:37:16.632678 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:37:16.632740 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:37:16.662670 1775552 cri.go:89] found id: ""
	I0127 12:37:16.662701 1775552 logs.go:282] 0 containers: []
	W0127 12:37:16.662713 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:37:16.662720 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:37:16.662807 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:37:16.693509 1775552 cri.go:89] found id: ""
	I0127 12:37:16.693534 1775552 logs.go:282] 0 containers: []
	W0127 12:37:16.693542 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:37:16.693548 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:37:16.693610 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:37:16.724506 1775552 cri.go:89] found id: ""
	I0127 12:37:16.724534 1775552 logs.go:282] 0 containers: []
	W0127 12:37:16.724542 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:37:16.724548 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:37:16.724607 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:37:16.755395 1775552 cri.go:89] found id: ""
	I0127 12:37:16.755423 1775552 logs.go:282] 0 containers: []
	W0127 12:37:16.755431 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:37:16.755442 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:37:16.755456 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:37:16.803021 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:37:16.803057 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:37:16.815709 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:37:16.815733 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:37:16.879451 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:37:16.879477 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:37:16.879495 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:37:16.956889 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:37:16.956927 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:37:19.496407 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:37:19.508779 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:37:19.508865 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:37:19.546828 1775552 cri.go:89] found id: ""
	I0127 12:37:19.546867 1775552 logs.go:282] 0 containers: []
	W0127 12:37:19.546877 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:37:19.546883 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:37:19.546948 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:37:19.579392 1775552 cri.go:89] found id: ""
	I0127 12:37:19.579420 1775552 logs.go:282] 0 containers: []
	W0127 12:37:19.579428 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:37:19.579434 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:37:19.579495 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:37:19.612791 1775552 cri.go:89] found id: ""
	I0127 12:37:19.612823 1775552 logs.go:282] 0 containers: []
	W0127 12:37:19.612833 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:37:19.612838 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:37:19.612901 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:37:19.643702 1775552 cri.go:89] found id: ""
	I0127 12:37:19.643734 1775552 logs.go:282] 0 containers: []
	W0127 12:37:19.643745 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:37:19.643752 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:37:19.643821 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:37:19.675480 1775552 cri.go:89] found id: ""
	I0127 12:37:19.675512 1775552 logs.go:282] 0 containers: []
	W0127 12:37:19.675524 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:37:19.675532 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:37:19.675599 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:37:19.715323 1775552 cri.go:89] found id: ""
	I0127 12:37:19.715361 1775552 logs.go:282] 0 containers: []
	W0127 12:37:19.715373 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:37:19.715385 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:37:19.715459 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:37:19.746388 1775552 cri.go:89] found id: ""
	I0127 12:37:19.746415 1775552 logs.go:282] 0 containers: []
	W0127 12:37:19.746424 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:37:19.746430 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:37:19.746498 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:37:19.777593 1775552 cri.go:89] found id: ""
	I0127 12:37:19.777626 1775552 logs.go:282] 0 containers: []
	W0127 12:37:19.777638 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:37:19.777653 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:37:19.777669 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:37:19.789885 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:37:19.789913 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:37:19.859722 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:37:19.859751 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:37:19.859766 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:37:19.933319 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:37:19.933348 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:37:19.974616 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:37:19.974646 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:37:22.526893 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:37:22.543448 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:37:22.543535 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:37:22.590206 1775552 cri.go:89] found id: ""
	I0127 12:37:22.590240 1775552 logs.go:282] 0 containers: []
	W0127 12:37:22.590251 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:37:22.590260 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:37:22.590331 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:37:22.637480 1775552 cri.go:89] found id: ""
	I0127 12:37:22.637510 1775552 logs.go:282] 0 containers: []
	W0127 12:37:22.637521 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:37:22.637528 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:37:22.637592 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:37:22.671717 1775552 cri.go:89] found id: ""
	I0127 12:37:22.671750 1775552 logs.go:282] 0 containers: []
	W0127 12:37:22.671762 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:37:22.671771 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:37:22.671844 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:37:22.706849 1775552 cri.go:89] found id: ""
	I0127 12:37:22.706883 1775552 logs.go:282] 0 containers: []
	W0127 12:37:22.706894 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:37:22.706909 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:37:22.706984 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:37:22.750947 1775552 cri.go:89] found id: ""
	I0127 12:37:22.750987 1775552 logs.go:282] 0 containers: []
	W0127 12:37:22.751000 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:37:22.751009 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:37:22.751081 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:37:22.785976 1775552 cri.go:89] found id: ""
	I0127 12:37:22.786006 1775552 logs.go:282] 0 containers: []
	W0127 12:37:22.786018 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:37:22.786027 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:37:22.786100 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:37:22.820650 1775552 cri.go:89] found id: ""
	I0127 12:37:22.820680 1775552 logs.go:282] 0 containers: []
	W0127 12:37:22.820689 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:37:22.820695 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:37:22.820746 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:37:22.854779 1775552 cri.go:89] found id: ""
	I0127 12:37:22.854812 1775552 logs.go:282] 0 containers: []
	W0127 12:37:22.854824 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:37:22.854838 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:37:22.854854 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:37:22.923605 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:37:22.923654 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:37:22.938107 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:37:22.938144 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:37:23.023476 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:37:23.023505 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:37:23.023522 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:37:23.121985 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:37:23.122037 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:37:25.672820 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:37:25.691909 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:37:25.691997 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:37:25.730842 1775552 cri.go:89] found id: ""
	I0127 12:37:25.730879 1775552 logs.go:282] 0 containers: []
	W0127 12:37:25.730891 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:37:25.730904 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:37:25.730984 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:37:25.777384 1775552 cri.go:89] found id: ""
	I0127 12:37:25.777415 1775552 logs.go:282] 0 containers: []
	W0127 12:37:25.777427 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:37:25.777435 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:37:25.777498 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:37:25.820039 1775552 cri.go:89] found id: ""
	I0127 12:37:25.820085 1775552 logs.go:282] 0 containers: []
	W0127 12:37:25.820097 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:37:25.820104 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:37:25.820180 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:37:25.859833 1775552 cri.go:89] found id: ""
	I0127 12:37:25.859865 1775552 logs.go:282] 0 containers: []
	W0127 12:37:25.859877 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:37:25.859885 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:37:25.859949 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:37:25.896663 1775552 cri.go:89] found id: ""
	I0127 12:37:25.896694 1775552 logs.go:282] 0 containers: []
	W0127 12:37:25.896705 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:37:25.896714 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:37:25.896775 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:37:25.932995 1775552 cri.go:89] found id: ""
	I0127 12:37:25.933036 1775552 logs.go:282] 0 containers: []
	W0127 12:37:25.933049 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:37:25.933061 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:37:25.933153 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:37:25.976017 1775552 cri.go:89] found id: ""
	I0127 12:37:25.976052 1775552 logs.go:282] 0 containers: []
	W0127 12:37:25.976064 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:37:25.976072 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:37:25.976153 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:37:26.012242 1775552 cri.go:89] found id: ""
	I0127 12:37:26.012269 1775552 logs.go:282] 0 containers: []
	W0127 12:37:26.012278 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:37:26.012293 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:37:26.012309 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:37:26.053056 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:37:26.053086 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:37:26.127069 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:37:26.127109 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:37:26.143528 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:37:26.143554 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:37:26.222459 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:37:26.222495 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:37:26.222512 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:37:28.800504 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:37:28.816138 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:37:28.816223 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:37:28.854340 1775552 cri.go:89] found id: ""
	I0127 12:37:28.854371 1775552 logs.go:282] 0 containers: []
	W0127 12:37:28.854382 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:37:28.854391 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:37:28.854458 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:37:28.896070 1775552 cri.go:89] found id: ""
	I0127 12:37:28.896099 1775552 logs.go:282] 0 containers: []
	W0127 12:37:28.896108 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:37:28.896113 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:37:28.896172 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:37:28.941528 1775552 cri.go:89] found id: ""
	I0127 12:37:28.941557 1775552 logs.go:282] 0 containers: []
	W0127 12:37:28.941568 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:37:28.941578 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:37:28.941673 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:37:28.980509 1775552 cri.go:89] found id: ""
	I0127 12:37:28.980544 1775552 logs.go:282] 0 containers: []
	W0127 12:37:28.980556 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:37:28.980565 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:37:28.980635 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:37:29.016150 1775552 cri.go:89] found id: ""
	I0127 12:37:29.016191 1775552 logs.go:282] 0 containers: []
	W0127 12:37:29.016203 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:37:29.016212 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:37:29.016281 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:37:29.050725 1775552 cri.go:89] found id: ""
	I0127 12:37:29.050774 1775552 logs.go:282] 0 containers: []
	W0127 12:37:29.050787 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:37:29.050796 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:37:29.050861 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:37:29.081060 1775552 cri.go:89] found id: ""
	I0127 12:37:29.081091 1775552 logs.go:282] 0 containers: []
	W0127 12:37:29.081102 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:37:29.081108 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:37:29.081186 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:37:29.121536 1775552 cri.go:89] found id: ""
	I0127 12:37:29.121574 1775552 logs.go:282] 0 containers: []
	W0127 12:37:29.121587 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:37:29.121601 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:37:29.121618 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:37:29.189348 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:37:29.189405 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:37:29.204958 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:37:29.204992 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:37:29.303882 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:37:29.303999 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:37:29.304036 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:37:29.428588 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:37:29.428643 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:37:31.979974 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:37:31.993952 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:37:31.994028 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:37:32.034378 1775552 cri.go:89] found id: ""
	I0127 12:37:32.034408 1775552 logs.go:282] 0 containers: []
	W0127 12:37:32.034420 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:37:32.034435 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:37:32.034502 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:37:32.067406 1775552 cri.go:89] found id: ""
	I0127 12:37:32.067442 1775552 logs.go:282] 0 containers: []
	W0127 12:37:32.067462 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:37:32.067471 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:37:32.067543 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:37:32.099521 1775552 cri.go:89] found id: ""
	I0127 12:37:32.099555 1775552 logs.go:282] 0 containers: []
	W0127 12:37:32.099576 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:37:32.099585 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:37:32.099698 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:37:32.139984 1775552 cri.go:89] found id: ""
	I0127 12:37:32.140019 1775552 logs.go:282] 0 containers: []
	W0127 12:37:32.140030 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:37:32.140037 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:37:32.140094 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:37:32.185057 1775552 cri.go:89] found id: ""
	I0127 12:37:32.185087 1775552 logs.go:282] 0 containers: []
	W0127 12:37:32.185096 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:37:32.185102 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:37:32.185156 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:37:32.223509 1775552 cri.go:89] found id: ""
	I0127 12:37:32.223546 1775552 logs.go:282] 0 containers: []
	W0127 12:37:32.223558 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:37:32.223567 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:37:32.223636 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:37:32.257985 1775552 cri.go:89] found id: ""
	I0127 12:37:32.258017 1775552 logs.go:282] 0 containers: []
	W0127 12:37:32.258029 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:37:32.258037 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:37:32.258108 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:37:32.295999 1775552 cri.go:89] found id: ""
	I0127 12:37:32.296030 1775552 logs.go:282] 0 containers: []
	W0127 12:37:32.296043 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:37:32.296057 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:37:32.296087 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:37:32.338724 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:37:32.338777 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:37:32.397839 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:37:32.397874 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:37:32.413958 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:37:32.413991 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:37:32.509039 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:37:32.509062 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:37:32.509078 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:37:35.095946 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:37:35.110905 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:37:35.110989 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:37:35.148888 1775552 cri.go:89] found id: ""
	I0127 12:37:35.148920 1775552 logs.go:282] 0 containers: []
	W0127 12:37:35.148931 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:37:35.148939 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:37:35.149010 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:37:35.195902 1775552 cri.go:89] found id: ""
	I0127 12:37:35.195937 1775552 logs.go:282] 0 containers: []
	W0127 12:37:35.195948 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:37:35.195956 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:37:35.196018 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:37:35.242674 1775552 cri.go:89] found id: ""
	I0127 12:37:35.242712 1775552 logs.go:282] 0 containers: []
	W0127 12:37:35.242723 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:37:35.242731 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:37:35.242817 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:37:35.292131 1775552 cri.go:89] found id: ""
	I0127 12:37:35.292166 1775552 logs.go:282] 0 containers: []
	W0127 12:37:35.292178 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:37:35.292187 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:37:35.292254 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:37:35.338178 1775552 cri.go:89] found id: ""
	I0127 12:37:35.338213 1775552 logs.go:282] 0 containers: []
	W0127 12:37:35.338225 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:37:35.338233 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:37:35.338299 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:37:35.385042 1775552 cri.go:89] found id: ""
	I0127 12:37:35.385074 1775552 logs.go:282] 0 containers: []
	W0127 12:37:35.385086 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:37:35.385095 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:37:35.385165 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:37:35.430571 1775552 cri.go:89] found id: ""
	I0127 12:37:35.430607 1775552 logs.go:282] 0 containers: []
	W0127 12:37:35.430620 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:37:35.430629 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:37:35.430714 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:37:35.475708 1775552 cri.go:89] found id: ""
	I0127 12:37:35.475753 1775552 logs.go:282] 0 containers: []
	W0127 12:37:35.475764 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:37:35.475778 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:37:35.475798 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:37:35.493534 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:37:35.493574 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:37:35.580727 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:37:35.580759 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:37:35.580778 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:37:35.664145 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:37:35.664191 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:37:35.702978 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:37:35.703007 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:37:38.259383 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:37:38.275110 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:37:38.275171 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:37:38.307902 1775552 cri.go:89] found id: ""
	I0127 12:37:38.307934 1775552 logs.go:282] 0 containers: []
	W0127 12:37:38.307946 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:37:38.307954 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:37:38.308026 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:37:38.343373 1775552 cri.go:89] found id: ""
	I0127 12:37:38.343399 1775552 logs.go:282] 0 containers: []
	W0127 12:37:38.343407 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:37:38.343414 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:37:38.343467 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:37:38.379157 1775552 cri.go:89] found id: ""
	I0127 12:37:38.379190 1775552 logs.go:282] 0 containers: []
	W0127 12:37:38.379199 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:37:38.379205 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:37:38.379274 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:37:38.414918 1775552 cri.go:89] found id: ""
	I0127 12:37:38.414953 1775552 logs.go:282] 0 containers: []
	W0127 12:37:38.414966 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:37:38.414976 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:37:38.415056 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:37:38.455883 1775552 cri.go:89] found id: ""
	I0127 12:37:38.455916 1775552 logs.go:282] 0 containers: []
	W0127 12:37:38.455928 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:37:38.455937 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:37:38.456015 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:37:38.493622 1775552 cri.go:89] found id: ""
	I0127 12:37:38.493710 1775552 logs.go:282] 0 containers: []
	W0127 12:37:38.493727 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:37:38.493736 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:37:38.493802 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:37:38.526019 1775552 cri.go:89] found id: ""
	I0127 12:37:38.526056 1775552 logs.go:282] 0 containers: []
	W0127 12:37:38.526072 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:37:38.526081 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:37:38.526152 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:37:38.560739 1775552 cri.go:89] found id: ""
	I0127 12:37:38.560771 1775552 logs.go:282] 0 containers: []
	W0127 12:37:38.560779 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:37:38.560790 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:37:38.560803 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:37:38.616922 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:37:38.616962 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:37:38.629492 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:37:38.629526 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:37:38.699617 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:37:38.699651 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:37:38.699668 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:37:38.780562 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:37:38.780604 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:37:41.319789 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:37:41.334385 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:37:41.334470 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:37:41.377716 1775552 cri.go:89] found id: ""
	I0127 12:37:41.377748 1775552 logs.go:282] 0 containers: []
	W0127 12:37:41.377759 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:37:41.377767 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:37:41.377837 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:37:41.415597 1775552 cri.go:89] found id: ""
	I0127 12:37:41.415630 1775552 logs.go:282] 0 containers: []
	W0127 12:37:41.415642 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:37:41.415650 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:37:41.415713 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:37:41.456225 1775552 cri.go:89] found id: ""
	I0127 12:37:41.456256 1775552 logs.go:282] 0 containers: []
	W0127 12:37:41.456266 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:37:41.456274 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:37:41.456329 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:37:41.495130 1775552 cri.go:89] found id: ""
	I0127 12:37:41.495153 1775552 logs.go:282] 0 containers: []
	W0127 12:37:41.495162 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:37:41.495170 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:37:41.495222 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:37:41.532050 1775552 cri.go:89] found id: ""
	I0127 12:37:41.532079 1775552 logs.go:282] 0 containers: []
	W0127 12:37:41.532090 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:37:41.532098 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:37:41.532161 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:37:41.567943 1775552 cri.go:89] found id: ""
	I0127 12:37:41.567983 1775552 logs.go:282] 0 containers: []
	W0127 12:37:41.567995 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:37:41.568004 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:37:41.568077 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:37:41.604234 1775552 cri.go:89] found id: ""
	I0127 12:37:41.604267 1775552 logs.go:282] 0 containers: []
	W0127 12:37:41.604279 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:37:41.604288 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:37:41.604359 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:37:41.642545 1775552 cri.go:89] found id: ""
	I0127 12:37:41.642574 1775552 logs.go:282] 0 containers: []
	W0127 12:37:41.642585 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:37:41.642598 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:37:41.642613 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:37:41.705121 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:37:41.705152 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:37:41.722150 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:37:41.722192 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:37:41.815576 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:37:41.815606 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:37:41.815626 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:37:41.929931 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:37:41.929978 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:37:44.474928 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:37:44.490201 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:37:44.490264 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:37:44.522943 1775552 cri.go:89] found id: ""
	I0127 12:37:44.522980 1775552 logs.go:282] 0 containers: []
	W0127 12:37:44.522994 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:37:44.523004 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:37:44.523081 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:37:44.566325 1775552 cri.go:89] found id: ""
	I0127 12:37:44.566350 1775552 logs.go:282] 0 containers: []
	W0127 12:37:44.566358 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:37:44.566364 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:37:44.566421 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:37:44.610155 1775552 cri.go:89] found id: ""
	I0127 12:37:44.610190 1775552 logs.go:282] 0 containers: []
	W0127 12:37:44.610202 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:37:44.610210 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:37:44.610279 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:37:44.663602 1775552 cri.go:89] found id: ""
	I0127 12:37:44.663637 1775552 logs.go:282] 0 containers: []
	W0127 12:37:44.663651 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:37:44.663659 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:37:44.663732 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:37:44.695700 1775552 cri.go:89] found id: ""
	I0127 12:37:44.695735 1775552 logs.go:282] 0 containers: []
	W0127 12:37:44.695747 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:37:44.695755 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:37:44.695827 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:37:44.727429 1775552 cri.go:89] found id: ""
	I0127 12:37:44.727461 1775552 logs.go:282] 0 containers: []
	W0127 12:37:44.727474 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:37:44.727483 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:37:44.727548 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:37:44.763533 1775552 cri.go:89] found id: ""
	I0127 12:37:44.763572 1775552 logs.go:282] 0 containers: []
	W0127 12:37:44.763585 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:37:44.763594 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:37:44.763671 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:37:44.805061 1775552 cri.go:89] found id: ""
	I0127 12:37:44.805097 1775552 logs.go:282] 0 containers: []
	W0127 12:37:44.805110 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:37:44.805126 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:37:44.805143 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:37:44.820565 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:37:44.820589 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:37:44.888209 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:37:44.888239 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:37:44.888252 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:37:44.968378 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:37:44.968423 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:37:45.020754 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:37:45.020788 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:37:47.574896 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:37:47.589844 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:37:47.589927 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:37:47.630284 1775552 cri.go:89] found id: ""
	I0127 12:37:47.630313 1775552 logs.go:282] 0 containers: []
	W0127 12:37:47.630330 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:37:47.630339 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:37:47.630413 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:37:47.662762 1775552 cri.go:89] found id: ""
	I0127 12:37:47.662794 1775552 logs.go:282] 0 containers: []
	W0127 12:37:47.662803 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:37:47.662810 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:37:47.662902 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:37:47.703030 1775552 cri.go:89] found id: ""
	I0127 12:37:47.703071 1775552 logs.go:282] 0 containers: []
	W0127 12:37:47.703082 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:37:47.703100 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:37:47.703181 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:37:47.739301 1775552 cri.go:89] found id: ""
	I0127 12:37:47.739330 1775552 logs.go:282] 0 containers: []
	W0127 12:37:47.739342 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:37:47.739356 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:37:47.739414 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:37:47.780874 1775552 cri.go:89] found id: ""
	I0127 12:37:47.780905 1775552 logs.go:282] 0 containers: []
	W0127 12:37:47.780917 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:37:47.780925 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:37:47.780996 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:37:47.813008 1775552 cri.go:89] found id: ""
	I0127 12:37:47.813043 1775552 logs.go:282] 0 containers: []
	W0127 12:37:47.813054 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:37:47.813063 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:37:47.813130 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:37:47.850378 1775552 cri.go:89] found id: ""
	I0127 12:37:47.850409 1775552 logs.go:282] 0 containers: []
	W0127 12:37:47.850421 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:37:47.850428 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:37:47.850493 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:37:47.880178 1775552 cri.go:89] found id: ""
	I0127 12:37:47.880210 1775552 logs.go:282] 0 containers: []
	W0127 12:37:47.880221 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:37:47.880236 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:37:47.880255 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:37:47.948086 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:37:47.948139 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:37:47.961400 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:37:47.961434 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:37:48.038188 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:37:48.038209 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:37:48.038224 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:37:48.126901 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:37:48.126956 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:37:50.666656 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:37:50.680779 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:37:50.680879 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:37:50.714143 1775552 cri.go:89] found id: ""
	I0127 12:37:50.714178 1775552 logs.go:282] 0 containers: []
	W0127 12:37:50.714186 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:37:50.714192 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:37:50.714244 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:37:50.746901 1775552 cri.go:89] found id: ""
	I0127 12:37:50.746929 1775552 logs.go:282] 0 containers: []
	W0127 12:37:50.746939 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:37:50.746946 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:37:50.747015 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:37:50.777299 1775552 cri.go:89] found id: ""
	I0127 12:37:50.777335 1775552 logs.go:282] 0 containers: []
	W0127 12:37:50.777343 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:37:50.777353 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:37:50.777407 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:37:50.808823 1775552 cri.go:89] found id: ""
	I0127 12:37:50.808864 1775552 logs.go:282] 0 containers: []
	W0127 12:37:50.808875 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:37:50.808886 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:37:50.808964 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:37:50.839306 1775552 cri.go:89] found id: ""
	I0127 12:37:50.839334 1775552 logs.go:282] 0 containers: []
	W0127 12:37:50.839343 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:37:50.839349 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:37:50.839417 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:37:50.870257 1775552 cri.go:89] found id: ""
	I0127 12:37:50.870283 1775552 logs.go:282] 0 containers: []
	W0127 12:37:50.870290 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:37:50.870297 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:37:50.870359 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:37:50.899245 1775552 cri.go:89] found id: ""
	I0127 12:37:50.899271 1775552 logs.go:282] 0 containers: []
	W0127 12:37:50.899279 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:37:50.899285 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:37:50.899346 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:37:50.930156 1775552 cri.go:89] found id: ""
	I0127 12:37:50.930187 1775552 logs.go:282] 0 containers: []
	W0127 12:37:50.930198 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:37:50.930209 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:37:50.930224 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:37:50.978701 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:37:50.978732 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:37:50.992061 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:37:50.992097 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:37:51.061244 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:37:51.061271 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:37:51.061288 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:37:51.132757 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:37:51.132791 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:37:53.669835 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:37:53.683156 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:37:53.683227 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:37:53.720463 1775552 cri.go:89] found id: ""
	I0127 12:37:53.720494 1775552 logs.go:282] 0 containers: []
	W0127 12:37:53.720503 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:37:53.720511 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:37:53.720577 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:37:53.758470 1775552 cri.go:89] found id: ""
	I0127 12:37:53.758500 1775552 logs.go:282] 0 containers: []
	W0127 12:37:53.758513 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:37:53.758521 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:37:53.758583 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:37:53.794720 1775552 cri.go:89] found id: ""
	I0127 12:37:53.794766 1775552 logs.go:282] 0 containers: []
	W0127 12:37:53.794777 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:37:53.794785 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:37:53.794836 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:37:53.829853 1775552 cri.go:89] found id: ""
	I0127 12:37:53.829883 1775552 logs.go:282] 0 containers: []
	W0127 12:37:53.829892 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:37:53.829898 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:37:53.829952 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:37:53.861243 1775552 cri.go:89] found id: ""
	I0127 12:37:53.861280 1775552 logs.go:282] 0 containers: []
	W0127 12:37:53.861291 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:37:53.861299 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:37:53.861386 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:37:53.893779 1775552 cri.go:89] found id: ""
	I0127 12:37:53.893813 1775552 logs.go:282] 0 containers: []
	W0127 12:37:53.893825 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:37:53.893834 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:37:53.893907 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:37:53.932290 1775552 cri.go:89] found id: ""
	I0127 12:37:53.932320 1775552 logs.go:282] 0 containers: []
	W0127 12:37:53.932332 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:37:53.932340 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:37:53.932397 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:37:53.962718 1775552 cri.go:89] found id: ""
	I0127 12:37:53.962777 1775552 logs.go:282] 0 containers: []
	W0127 12:37:53.962788 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:37:53.962798 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:37:53.962810 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:37:54.020392 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:37:54.020436 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:37:54.034589 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:37:54.034624 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:37:54.105683 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:37:54.105714 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:37:54.105730 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:37:54.189103 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:37:54.189141 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:37:56.730257 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:37:56.746490 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:37:56.746573 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:37:56.780313 1775552 cri.go:89] found id: ""
	I0127 12:37:56.780344 1775552 logs.go:282] 0 containers: []
	W0127 12:37:56.780356 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:37:56.780363 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:37:56.780421 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:37:56.811889 1775552 cri.go:89] found id: ""
	I0127 12:37:56.811924 1775552 logs.go:282] 0 containers: []
	W0127 12:37:56.811936 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:37:56.811944 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:37:56.811999 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:37:56.851035 1775552 cri.go:89] found id: ""
	I0127 12:37:56.851067 1775552 logs.go:282] 0 containers: []
	W0127 12:37:56.851078 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:37:56.851087 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:37:56.851156 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:37:56.892274 1775552 cri.go:89] found id: ""
	I0127 12:37:56.892304 1775552 logs.go:282] 0 containers: []
	W0127 12:37:56.892315 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:37:56.892322 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:37:56.892380 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:37:56.926696 1775552 cri.go:89] found id: ""
	I0127 12:37:56.926722 1775552 logs.go:282] 0 containers: []
	W0127 12:37:56.926733 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:37:56.926758 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:37:56.926814 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:37:56.959758 1775552 cri.go:89] found id: ""
	I0127 12:37:56.959783 1775552 logs.go:282] 0 containers: []
	W0127 12:37:56.959794 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:37:56.959802 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:37:56.959859 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:37:56.993565 1775552 cri.go:89] found id: ""
	I0127 12:37:56.993591 1775552 logs.go:282] 0 containers: []
	W0127 12:37:56.993598 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:37:56.993604 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:37:56.993658 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:37:57.026576 1775552 cri.go:89] found id: ""
	I0127 12:37:57.026609 1775552 logs.go:282] 0 containers: []
	W0127 12:37:57.026620 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:37:57.026634 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:37:57.026648 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:37:57.076295 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:37:57.076330 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:37:57.089003 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:37:57.089029 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:37:57.160758 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:37:57.160782 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:37:57.160796 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:37:57.235168 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:37:57.235199 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:37:59.779541 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:37:59.792105 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:37:59.792199 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:37:59.836833 1775552 cri.go:89] found id: ""
	I0127 12:37:59.836869 1775552 logs.go:282] 0 containers: []
	W0127 12:37:59.836880 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:37:59.836886 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:37:59.836955 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:37:59.877566 1775552 cri.go:89] found id: ""
	I0127 12:37:59.877593 1775552 logs.go:282] 0 containers: []
	W0127 12:37:59.877603 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:37:59.877607 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:37:59.877670 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:37:59.919155 1775552 cri.go:89] found id: ""
	I0127 12:37:59.919196 1775552 logs.go:282] 0 containers: []
	W0127 12:37:59.919209 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:37:59.919218 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:37:59.919331 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:37:59.962585 1775552 cri.go:89] found id: ""
	I0127 12:37:59.962619 1775552 logs.go:282] 0 containers: []
	W0127 12:37:59.962629 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:37:59.962636 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:37:59.962708 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:38:00.005600 1775552 cri.go:89] found id: ""
	I0127 12:38:00.005635 1775552 logs.go:282] 0 containers: []
	W0127 12:38:00.005649 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:38:00.005657 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:38:00.005732 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:38:00.041893 1775552 cri.go:89] found id: ""
	I0127 12:38:00.041927 1775552 logs.go:282] 0 containers: []
	W0127 12:38:00.041939 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:38:00.041947 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:38:00.042010 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:38:00.079375 1775552 cri.go:89] found id: ""
	I0127 12:38:00.079412 1775552 logs.go:282] 0 containers: []
	W0127 12:38:00.079424 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:38:00.079431 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:38:00.079502 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:38:00.124572 1775552 cri.go:89] found id: ""
	I0127 12:38:00.124603 1775552 logs.go:282] 0 containers: []
	W0127 12:38:00.124615 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:38:00.124628 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:38:00.124647 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:38:00.170623 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:38:00.170657 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:38:00.233104 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:38:00.233151 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:38:00.250211 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:38:00.250247 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:38:00.335562 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:38:00.335589 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:38:00.335609 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:38:02.926913 1775552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:38:02.940231 1775552 kubeadm.go:597] duration metric: took 4m4.63565701s to restartPrimaryControlPlane
	W0127 12:38:02.940311 1775552 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 12:38:02.940332 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 12:38:06.954293 1775552 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.01393895s)
	I0127 12:38:06.954363 1775552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:38:06.967748 1775552 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 12:38:06.977205 1775552 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 12:38:06.986562 1775552 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 12:38:06.986583 1775552 kubeadm.go:157] found existing configuration files:
	
	I0127 12:38:06.986633 1775552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 12:38:06.996049 1775552 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 12:38:06.996098 1775552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 12:38:07.005883 1775552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 12:38:07.015350 1775552 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 12:38:07.015391 1775552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 12:38:07.025067 1775552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 12:38:07.033779 1775552 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 12:38:07.033850 1775552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 12:38:07.043717 1775552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 12:38:07.051645 1775552 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 12:38:07.051687 1775552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 12:38:07.059888 1775552 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 12:38:07.131642 1775552 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 12:38:07.131712 1775552 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 12:38:07.271606 1775552 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 12:38:07.271752 1775552 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 12:38:07.271935 1775552 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 12:38:07.471247 1775552 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 12:38:07.472900 1775552 out.go:235]   - Generating certificates and keys ...
	I0127 12:38:07.472996 1775552 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 12:38:07.473048 1775552 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 12:38:07.473134 1775552 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 12:38:07.473231 1775552 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 12:38:07.473304 1775552 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 12:38:07.473376 1775552 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 12:38:07.473465 1775552 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 12:38:07.473533 1775552 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 12:38:07.473625 1775552 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 12:38:07.473739 1775552 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 12:38:07.473796 1775552 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 12:38:07.473877 1775552 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 12:38:07.817511 1775552 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 12:38:07.937054 1775552 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 12:38:08.454571 1775552 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 12:38:08.565567 1775552 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 12:38:08.580959 1775552 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 12:38:08.582017 1775552 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 12:38:08.582094 1775552 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 12:38:08.724546 1775552 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 12:38:08.726722 1775552 out.go:235]   - Booting up control plane ...
	I0127 12:38:08.726877 1775552 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 12:38:08.731355 1775552 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 12:38:08.732239 1775552 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 12:38:08.733026 1775552 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 12:38:08.744329 1775552 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 12:38:48.742041 1775552 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 12:38:48.742800 1775552 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 12:38:48.743030 1775552 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 12:38:53.743312 1775552 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 12:38:53.743544 1775552 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 12:39:03.743936 1775552 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 12:39:03.744206 1775552 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 12:39:23.744719 1775552 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 12:39:23.744937 1775552 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 12:40:03.746671 1775552 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 12:40:03.747063 1775552 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 12:40:03.747088 1775552 kubeadm.go:310] 
	I0127 12:40:03.747144 1775552 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 12:40:03.747200 1775552 kubeadm.go:310] 		timed out waiting for the condition
	I0127 12:40:03.747210 1775552 kubeadm.go:310] 
	I0127 12:40:03.747267 1775552 kubeadm.go:310] 	This error is likely caused by:
	I0127 12:40:03.747321 1775552 kubeadm.go:310] 		- The kubelet is not running
	I0127 12:40:03.747409 1775552 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 12:40:03.747417 1775552 kubeadm.go:310] 
	I0127 12:40:03.747531 1775552 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 12:40:03.747574 1775552 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 12:40:03.747614 1775552 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 12:40:03.747626 1775552 kubeadm.go:310] 
	I0127 12:40:03.747760 1775552 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 12:40:03.747879 1775552 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 12:40:03.747899 1775552 kubeadm.go:310] 
	I0127 12:40:03.748062 1775552 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 12:40:03.748180 1775552 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 12:40:03.748289 1775552 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 12:40:03.748357 1775552 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 12:40:03.748366 1775552 kubeadm.go:310] 
	I0127 12:40:03.749261 1775552 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 12:40:03.749399 1775552 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 12:40:03.749529 1775552 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0127 12:40:03.749710 1775552 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0127 12:40:03.749788 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 12:40:04.205262 1775552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:40:04.218853 1775552 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 12:40:04.229442 1775552 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 12:40:04.229464 1775552 kubeadm.go:157] found existing configuration files:
	
	I0127 12:40:04.229510 1775552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 12:40:04.239586 1775552 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 12:40:04.239649 1775552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 12:40:04.248755 1775552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 12:40:04.257541 1775552 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 12:40:04.257602 1775552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 12:40:04.266272 1775552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 12:40:04.274689 1775552 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 12:40:04.274751 1775552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 12:40:04.283029 1775552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 12:40:04.291245 1775552 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 12:40:04.291287 1775552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 12:40:04.299257 1775552 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 12:40:04.511225 1775552 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 12:42:00.380790 1775552 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 12:42:00.380903 1775552 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 12:42:00.382723 1775552 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 12:42:00.382825 1775552 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 12:42:00.382928 1775552 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 12:42:00.383106 1775552 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 12:42:00.383236 1775552 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 12:42:00.383341 1775552 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 12:42:00.384612 1775552 out.go:235]   - Generating certificates and keys ...
	I0127 12:42:00.384705 1775552 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 12:42:00.384794 1775552 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 12:42:00.384897 1775552 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 12:42:00.384993 1775552 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 12:42:00.385096 1775552 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 12:42:00.385180 1775552 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 12:42:00.385241 1775552 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 12:42:00.385327 1775552 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 12:42:00.385449 1775552 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 12:42:00.385554 1775552 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 12:42:00.385611 1775552 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 12:42:00.385698 1775552 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 12:42:00.385749 1775552 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 12:42:00.385795 1775552 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 12:42:00.385863 1775552 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 12:42:00.385911 1775552 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 12:42:00.386003 1775552 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 12:42:00.386092 1775552 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 12:42:00.386165 1775552 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 12:42:00.386259 1775552 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 12:42:00.387546 1775552 out.go:235]   - Booting up control plane ...
	I0127 12:42:00.387659 1775552 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 12:42:00.387750 1775552 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 12:42:00.387858 1775552 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 12:42:00.388007 1775552 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 12:42:00.388246 1775552 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 12:42:00.388314 1775552 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 12:42:00.388419 1775552 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 12:42:00.388584 1775552 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 12:42:00.388643 1775552 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 12:42:00.388794 1775552 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 12:42:00.388855 1775552 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 12:42:00.389006 1775552 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 12:42:00.389064 1775552 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 12:42:00.389217 1775552 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 12:42:00.389276 1775552 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 12:42:00.389478 1775552 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 12:42:00.389487 1775552 kubeadm.go:310] 
	I0127 12:42:00.389522 1775552 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 12:42:00.389557 1775552 kubeadm.go:310] 		timed out waiting for the condition
	I0127 12:42:00.389563 1775552 kubeadm.go:310] 
	I0127 12:42:00.389616 1775552 kubeadm.go:310] 	This error is likely caused by:
	I0127 12:42:00.389659 1775552 kubeadm.go:310] 		- The kubelet is not running
	I0127 12:42:00.389796 1775552 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 12:42:00.389805 1775552 kubeadm.go:310] 
	I0127 12:42:00.389893 1775552 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 12:42:00.389923 1775552 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 12:42:00.389951 1775552 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 12:42:00.389957 1775552 kubeadm.go:310] 
	I0127 12:42:00.390040 1775552 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 12:42:00.390111 1775552 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 12:42:00.390117 1775552 kubeadm.go:310] 
	I0127 12:42:00.390238 1775552 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 12:42:00.390344 1775552 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 12:42:00.390433 1775552 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 12:42:00.390521 1775552 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 12:42:00.390600 1775552 kubeadm.go:394] duration metric: took 8m2.1292845s to StartCluster
	I0127 12:42:00.390672 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:42:00.390740 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:42:00.390808 1775552 kubeadm.go:310] 
	I0127 12:42:00.434697 1775552 cri.go:89] found id: ""
	I0127 12:42:00.434734 1775552 logs.go:282] 0 containers: []
	W0127 12:42:00.434758 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:42:00.434768 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:42:00.434839 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:42:00.468250 1775552 cri.go:89] found id: ""
	I0127 12:42:00.468283 1775552 logs.go:282] 0 containers: []
	W0127 12:42:00.468296 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:42:00.468304 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:42:00.468379 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:42:00.501136 1775552 cri.go:89] found id: ""
	I0127 12:42:00.501171 1775552 logs.go:282] 0 containers: []
	W0127 12:42:00.501183 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:42:00.501191 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:42:00.501267 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:42:00.534246 1775552 cri.go:89] found id: ""
	I0127 12:42:00.534293 1775552 logs.go:282] 0 containers: []
	W0127 12:42:00.534305 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:42:00.534313 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:42:00.534374 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:42:00.569901 1775552 cri.go:89] found id: ""
	I0127 12:42:00.569938 1775552 logs.go:282] 0 containers: []
	W0127 12:42:00.569951 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:42:00.569959 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:42:00.570023 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:42:00.607468 1775552 cri.go:89] found id: ""
	I0127 12:42:00.607499 1775552 logs.go:282] 0 containers: []
	W0127 12:42:00.607511 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:42:00.607519 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:42:00.607584 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:42:00.640103 1775552 cri.go:89] found id: ""
	I0127 12:42:00.640143 1775552 logs.go:282] 0 containers: []
	W0127 12:42:00.640156 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:42:00.640165 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:42:00.640241 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:42:00.673571 1775552 cri.go:89] found id: ""
	I0127 12:42:00.673610 1775552 logs.go:282] 0 containers: []
	W0127 12:42:00.673624 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:42:00.673640 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:42:00.673661 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:42:00.689315 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:42:00.689362 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:42:00.766937 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:42:00.766970 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:42:00.767002 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:42:00.900474 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:42:00.900514 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:42:00.939400 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:42:00.939441 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0127 12:42:01.005217 1775552 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0127 12:42:01.005274 1775552 out.go:270] * 
	* 
	W0127 12:42:01.005350 1775552 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 12:42:01.005369 1775552 out.go:270] * 
	* 
	W0127 12:42:01.006281 1775552 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 12:42:01.009438 1775552 out.go:201] 
	W0127 12:42:01.010472 1775552 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 12:42:01.010517 1775552 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0127 12:42:01.010560 1775552 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0127 12:42:01.011726 1775552 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-488586 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
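A minimal troubleshooting sketch, consolidating the commands suggested in the stderr log above; the profile name, driver flags, and the kubelet.cgroup-driver setting are taken from this run and are assumptions for any other setup:

	# Check kubelet health and the control-plane containers on the node (commands from the kubeadm advice above)
	out/minikube-linux-amd64 -p old-k8s-version-488586 ssh "sudo systemctl status kubelet --no-pager"
	out/minikube-linux-amd64 -p old-k8s-version-488586 ssh "sudo journalctl -xeu kubelet --no-pager"
	out/minikube-linux-amd64 -p old-k8s-version-488586 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

	# Retry the start with the cgroup driver minikube suggests for K8S_KUBELET_NOT_RUNNING
	out/minikube-linux-amd64 start -p old-k8s-version-488586 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd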
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-488586 -n old-k8s-version-488586
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-488586 -n old-k8s-version-488586: exit status 2 (254.143343ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-488586 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-956477 sudo                               | kindnet-956477 | jenkins | v1.35.0 | 27 Jan 25 12:41 UTC | 27 Jan 25 12:41 UTC |
	|         | systemctl status kubelet --all                       |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p kindnet-956477 sudo                               | kindnet-956477 | jenkins | v1.35.0 | 27 Jan 25 12:41 UTC | 27 Jan 25 12:41 UTC |
	|         | systemctl cat kubelet                                |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p kindnet-956477 sudo                               | kindnet-956477 | jenkins | v1.35.0 | 27 Jan 25 12:41 UTC | 27 Jan 25 12:41 UTC |
	|         | journalctl -xeu kubelet --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p kindnet-956477 sudo cat                           | kindnet-956477 | jenkins | v1.35.0 | 27 Jan 25 12:41 UTC | 27 Jan 25 12:41 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                |         |         |                     |                     |
	| ssh     | -p kindnet-956477 sudo cat                           | kindnet-956477 | jenkins | v1.35.0 | 27 Jan 25 12:41 UTC | 27 Jan 25 12:41 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                |         |         |                     |                     |
	| ssh     | -p kindnet-956477 sudo                               | kindnet-956477 | jenkins | v1.35.0 | 27 Jan 25 12:41 UTC |                     |
	|         | systemctl status docker --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p kindnet-956477 sudo                               | kindnet-956477 | jenkins | v1.35.0 | 27 Jan 25 12:41 UTC | 27 Jan 25 12:41 UTC |
	|         | systemctl cat docker                                 |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p kindnet-956477 sudo cat                           | kindnet-956477 | jenkins | v1.35.0 | 27 Jan 25 12:41 UTC | 27 Jan 25 12:41 UTC |
	|         | /etc/docker/daemon.json                              |                |         |         |                     |                     |
	| ssh     | -p kindnet-956477 sudo docker                        | kindnet-956477 | jenkins | v1.35.0 | 27 Jan 25 12:41 UTC |                     |
	|         | system info                                          |                |         |         |                     |                     |
	| ssh     | -p kindnet-956477 sudo                               | kindnet-956477 | jenkins | v1.35.0 | 27 Jan 25 12:41 UTC |                     |
	|         | systemctl status cri-docker                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p kindnet-956477 sudo                               | kindnet-956477 | jenkins | v1.35.0 | 27 Jan 25 12:41 UTC | 27 Jan 25 12:41 UTC |
	|         | systemctl cat cri-docker                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p kindnet-956477 sudo cat                           | kindnet-956477 | jenkins | v1.35.0 | 27 Jan 25 12:41 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |         |         |                     |                     |
	| ssh     | -p kindnet-956477 sudo cat                           | kindnet-956477 | jenkins | v1.35.0 | 27 Jan 25 12:41 UTC | 27 Jan 25 12:41 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |         |         |                     |                     |
	| ssh     | -p kindnet-956477 sudo                               | kindnet-956477 | jenkins | v1.35.0 | 27 Jan 25 12:41 UTC | 27 Jan 25 12:41 UTC |
	|         | cri-dockerd --version                                |                |         |         |                     |                     |
	| ssh     | -p kindnet-956477 sudo                               | kindnet-956477 | jenkins | v1.35.0 | 27 Jan 25 12:41 UTC |                     |
	|         | systemctl status containerd                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p kindnet-956477 sudo                               | kindnet-956477 | jenkins | v1.35.0 | 27 Jan 25 12:41 UTC | 27 Jan 25 12:41 UTC |
	|         | systemctl cat containerd                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p kindnet-956477 sudo cat                           | kindnet-956477 | jenkins | v1.35.0 | 27 Jan 25 12:41 UTC | 27 Jan 25 12:41 UTC |
	|         | /lib/systemd/system/containerd.service               |                |         |         |                     |                     |
	| ssh     | -p kindnet-956477 sudo cat                           | kindnet-956477 | jenkins | v1.35.0 | 27 Jan 25 12:41 UTC | 27 Jan 25 12:41 UTC |
	|         | /etc/containerd/config.toml                          |                |         |         |                     |                     |
	| ssh     | -p kindnet-956477 sudo                               | kindnet-956477 | jenkins | v1.35.0 | 27 Jan 25 12:41 UTC | 27 Jan 25 12:41 UTC |
	|         | containerd config dump                               |                |         |         |                     |                     |
	| ssh     | -p kindnet-956477 sudo                               | kindnet-956477 | jenkins | v1.35.0 | 27 Jan 25 12:41 UTC | 27 Jan 25 12:41 UTC |
	|         | systemctl status crio --all                          |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p kindnet-956477 sudo                               | kindnet-956477 | jenkins | v1.35.0 | 27 Jan 25 12:41 UTC | 27 Jan 25 12:41 UTC |
	|         | systemctl cat crio --no-pager                        |                |         |         |                     |                     |
	| ssh     | -p kindnet-956477 sudo find                          | kindnet-956477 | jenkins | v1.35.0 | 27 Jan 25 12:41 UTC | 27 Jan 25 12:41 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |         |                     |                     |
	| ssh     | -p kindnet-956477 sudo crio                          | kindnet-956477 | jenkins | v1.35.0 | 27 Jan 25 12:41 UTC | 27 Jan 25 12:41 UTC |
	|         | config                                               |                |         |         |                     |                     |
	| delete  | -p kindnet-956477                                    | kindnet-956477 | jenkins | v1.35.0 | 27 Jan 25 12:41 UTC | 27 Jan 25 12:41 UTC |
	| start   | -p calico-956477 --memory=3072                       | calico-956477  | jenkins | v1.35.0 | 27 Jan 25 12:41 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                           |                |         |         |                     |                     |
	|         | --container-runtime=crio                             |                |         |         |                     |                     |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 12:41:51
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 12:41:51.007277 1782105 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:41:51.007417 1782105 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:41:51.007429 1782105 out.go:358] Setting ErrFile to fd 2...
	I0127 12:41:51.007436 1782105 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:41:51.007704 1782105 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-1724227/.minikube/bin
	I0127 12:41:51.008498 1782105 out.go:352] Setting JSON to false
	I0127 12:41:51.009961 1782105 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":33852,"bootTime":1737947859,"procs":311,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 12:41:51.010063 1782105 start.go:139] virtualization: kvm guest
	I0127 12:41:51.012148 1782105 out.go:177] * [calico-956477] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 12:41:51.013249 1782105 out.go:177]   - MINIKUBE_LOCATION=20318
	I0127 12:41:51.013302 1782105 notify.go:220] Checking for updates...
	I0127 12:41:51.015230 1782105 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:41:51.016371 1782105 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20318-1724227/kubeconfig
	I0127 12:41:51.017471 1782105 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-1724227/.minikube
	I0127 12:41:51.018445 1782105 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 12:41:51.019444 1782105 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:41:51.020827 1782105 config.go:182] Loaded profile config "default-k8s-diff-port-485564": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:41:51.020952 1782105 config.go:182] Loaded profile config "no-preload-472479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:41:51.021053 1782105 config.go:182] Loaded profile config "old-k8s-version-488586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 12:41:51.021146 1782105 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:41:51.057420 1782105 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 12:41:51.058681 1782105 start.go:297] selected driver: kvm2
	I0127 12:41:51.058694 1782105 start.go:901] validating driver "kvm2" against <nil>
	I0127 12:41:51.058704 1782105 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:41:51.059651 1782105 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:41:51.059732 1782105 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20318-1724227/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 12:41:51.075013 1782105 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 12:41:51.075073 1782105 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 12:41:51.075306 1782105 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 12:41:51.075337 1782105 cni.go:84] Creating CNI manager for "calico"
	I0127 12:41:51.075341 1782105 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0127 12:41:51.075405 1782105 start.go:340] cluster config:
	{Name:calico-956477 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:calico-956477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:41:51.075509 1782105 iso.go:125] acquiring lock: {Name:mkadfcca31fda677c8c62cad1d325dd2bd0a2473 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:41:51.076855 1782105 out.go:177] * Starting "calico-956477" primary control-plane node in "calico-956477" cluster
	I0127 12:41:51.077813 1782105 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 12:41:51.077842 1782105 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 12:41:51.077852 1782105 cache.go:56] Caching tarball of preloaded images
	I0127 12:41:51.077931 1782105 preload.go:172] Found /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 12:41:51.077941 1782105 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 12:41:51.078024 1782105 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/calico-956477/config.json ...
	I0127 12:41:51.078040 1782105 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/calico-956477/config.json: {Name:mk008ae61833c0687eba13aa8cfb9b68ea50d529 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:41:51.078187 1782105 start.go:360] acquireMachinesLock for calico-956477: {Name:mk206ddd5e564ea6986d65bef76be5837a8b5360 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 12:41:51.078216 1782105 start.go:364] duration metric: took 16.44µs to acquireMachinesLock for "calico-956477"
	I0127 12:41:51.078232 1782105 start.go:93] Provisioning new machine with config: &{Name:calico-956477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:calico-956477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 12:41:51.078284 1782105 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 12:41:51.079486 1782105 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0127 12:41:51.079607 1782105 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:41:51.079642 1782105 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:41:51.093771 1782105 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42315
	I0127 12:41:51.094247 1782105 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:41:51.094791 1782105 main.go:141] libmachine: Using API Version  1
	I0127 12:41:51.094815 1782105 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:41:51.095131 1782105 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:41:51.095331 1782105 main.go:141] libmachine: (calico-956477) Calling .GetMachineName
	I0127 12:41:51.095486 1782105 main.go:141] libmachine: (calico-956477) Calling .DriverName
	I0127 12:41:51.095635 1782105 start.go:159] libmachine.API.Create for "calico-956477" (driver="kvm2")
	I0127 12:41:51.095682 1782105 client.go:168] LocalClient.Create starting
	I0127 12:41:51.095715 1782105 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem
	I0127 12:41:51.095746 1782105 main.go:141] libmachine: Decoding PEM data...
	I0127 12:41:51.095766 1782105 main.go:141] libmachine: Parsing certificate...
	I0127 12:41:51.095815 1782105 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem
	I0127 12:41:51.095836 1782105 main.go:141] libmachine: Decoding PEM data...
	I0127 12:41:51.095847 1782105 main.go:141] libmachine: Parsing certificate...
	I0127 12:41:51.095860 1782105 main.go:141] libmachine: Running pre-create checks...
	I0127 12:41:51.095868 1782105 main.go:141] libmachine: (calico-956477) Calling .PreCreateCheck
	I0127 12:41:51.096197 1782105 main.go:141] libmachine: (calico-956477) Calling .GetConfigRaw
	I0127 12:41:51.096569 1782105 main.go:141] libmachine: Creating machine...
	I0127 12:41:51.096582 1782105 main.go:141] libmachine: (calico-956477) Calling .Create
	I0127 12:41:51.096721 1782105 main.go:141] libmachine: (calico-956477) creating KVM machine...
	I0127 12:41:51.096732 1782105 main.go:141] libmachine: (calico-956477) creating network...
	I0127 12:41:51.097798 1782105 main.go:141] libmachine: (calico-956477) DBG | found existing default KVM network
	I0127 12:41:51.099039 1782105 main.go:141] libmachine: (calico-956477) DBG | I0127 12:41:51.098873 1782128 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:13:89:36} reservation:<nil>}
	I0127 12:41:51.099889 1782105 main.go:141] libmachine: (calico-956477) DBG | I0127 12:41:51.099820 1782128 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:f9:0f:53} reservation:<nil>}
	I0127 12:41:51.100759 1782105 main.go:141] libmachine: (calico-956477) DBG | I0127 12:41:51.100667 1782128 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:ac:57:68} reservation:<nil>}
	I0127 12:41:51.101875 1782105 main.go:141] libmachine: (calico-956477) DBG | I0127 12:41:51.101808 1782128 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000328fc0}
	I0127 12:41:51.101917 1782105 main.go:141] libmachine: (calico-956477) DBG | created network xml: 
	I0127 12:41:51.101939 1782105 main.go:141] libmachine: (calico-956477) DBG | <network>
	I0127 12:41:51.101952 1782105 main.go:141] libmachine: (calico-956477) DBG |   <name>mk-calico-956477</name>
	I0127 12:41:51.101966 1782105 main.go:141] libmachine: (calico-956477) DBG |   <dns enable='no'/>
	I0127 12:41:51.101979 1782105 main.go:141] libmachine: (calico-956477) DBG |   
	I0127 12:41:51.102003 1782105 main.go:141] libmachine: (calico-956477) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0127 12:41:51.102011 1782105 main.go:141] libmachine: (calico-956477) DBG |     <dhcp>
	I0127 12:41:51.102016 1782105 main.go:141] libmachine: (calico-956477) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0127 12:41:51.102021 1782105 main.go:141] libmachine: (calico-956477) DBG |     </dhcp>
	I0127 12:41:51.102027 1782105 main.go:141] libmachine: (calico-956477) DBG |   </ip>
	I0127 12:41:51.102031 1782105 main.go:141] libmachine: (calico-956477) DBG |   
	I0127 12:41:51.102037 1782105 main.go:141] libmachine: (calico-956477) DBG | </network>
	I0127 12:41:51.102086 1782105 main.go:141] libmachine: (calico-956477) DBG | 
	I0127 12:41:51.107054 1782105 main.go:141] libmachine: (calico-956477) DBG | trying to create private KVM network mk-calico-956477 192.168.72.0/24...
	I0127 12:41:51.179133 1782105 main.go:141] libmachine: (calico-956477) DBG | private KVM network mk-calico-956477 192.168.72.0/24 created
	I0127 12:41:51.179173 1782105 main.go:141] libmachine: (calico-956477) setting up store path in /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/calico-956477 ...
	I0127 12:41:51.179187 1782105 main.go:141] libmachine: (calico-956477) DBG | I0127 12:41:51.179100 1782128 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20318-1724227/.minikube
	I0127 12:41:51.179209 1782105 main.go:141] libmachine: (calico-956477) building disk image from file:///home/jenkins/minikube-integration/20318-1724227/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 12:41:51.179236 1782105 main.go:141] libmachine: (calico-956477) Downloading /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20318-1724227/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 12:41:51.482927 1782105 main.go:141] libmachine: (calico-956477) DBG | I0127 12:41:51.482785 1782128 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/calico-956477/id_rsa...
	I0127 12:41:51.570513 1782105 main.go:141] libmachine: (calico-956477) DBG | I0127 12:41:51.570386 1782128 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/calico-956477/calico-956477.rawdisk...
	I0127 12:41:51.570547 1782105 main.go:141] libmachine: (calico-956477) DBG | Writing magic tar header
	I0127 12:41:51.570559 1782105 main.go:141] libmachine: (calico-956477) DBG | Writing SSH key tar header
	I0127 12:41:51.570567 1782105 main.go:141] libmachine: (calico-956477) DBG | I0127 12:41:51.570504 1782128 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/calico-956477 ...
	I0127 12:41:51.570620 1782105 main.go:141] libmachine: (calico-956477) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/calico-956477
	I0127 12:41:51.570712 1782105 main.go:141] libmachine: (calico-956477) setting executable bit set on /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/calico-956477 (perms=drwx------)
	I0127 12:41:51.570738 1782105 main.go:141] libmachine: (calico-956477) setting executable bit set on /home/jenkins/minikube-integration/20318-1724227/.minikube/machines (perms=drwxr-xr-x)
	I0127 12:41:51.570769 1782105 main.go:141] libmachine: (calico-956477) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines
	I0127 12:41:51.570790 1782105 main.go:141] libmachine: (calico-956477) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20318-1724227/.minikube
	I0127 12:41:51.570799 1782105 main.go:141] libmachine: (calico-956477) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20318-1724227
	I0127 12:41:51.570808 1782105 main.go:141] libmachine: (calico-956477) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 12:41:51.570816 1782105 main.go:141] libmachine: (calico-956477) DBG | checking permissions on dir: /home/jenkins
	I0127 12:41:51.570826 1782105 main.go:141] libmachine: (calico-956477) DBG | checking permissions on dir: /home
	I0127 12:41:51.570837 1782105 main.go:141] libmachine: (calico-956477) DBG | skipping /home - not owner
	I0127 12:41:51.570858 1782105 main.go:141] libmachine: (calico-956477) setting executable bit set on /home/jenkins/minikube-integration/20318-1724227/.minikube (perms=drwxr-xr-x)
	I0127 12:41:51.570879 1782105 main.go:141] libmachine: (calico-956477) setting executable bit set on /home/jenkins/minikube-integration/20318-1724227 (perms=drwxrwxr-x)
	I0127 12:41:51.570900 1782105 main.go:141] libmachine: (calico-956477) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 12:41:51.570912 1782105 main.go:141] libmachine: (calico-956477) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 12:41:51.570923 1782105 main.go:141] libmachine: (calico-956477) creating domain...
	I0127 12:41:51.571898 1782105 main.go:141] libmachine: (calico-956477) define libvirt domain using xml: 
	I0127 12:41:51.571918 1782105 main.go:141] libmachine: (calico-956477) <domain type='kvm'>
	I0127 12:41:51.571925 1782105 main.go:141] libmachine: (calico-956477)   <name>calico-956477</name>
	I0127 12:41:51.571930 1782105 main.go:141] libmachine: (calico-956477)   <memory unit='MiB'>3072</memory>
	I0127 12:41:51.571937 1782105 main.go:141] libmachine: (calico-956477)   <vcpu>2</vcpu>
	I0127 12:41:51.571953 1782105 main.go:141] libmachine: (calico-956477)   <features>
	I0127 12:41:51.571966 1782105 main.go:141] libmachine: (calico-956477)     <acpi/>
	I0127 12:41:51.571971 1782105 main.go:141] libmachine: (calico-956477)     <apic/>
	I0127 12:41:51.571981 1782105 main.go:141] libmachine: (calico-956477)     <pae/>
	I0127 12:41:51.571986 1782105 main.go:141] libmachine: (calico-956477)     
	I0127 12:41:51.571996 1782105 main.go:141] libmachine: (calico-956477)   </features>
	I0127 12:41:51.572003 1782105 main.go:141] libmachine: (calico-956477)   <cpu mode='host-passthrough'>
	I0127 12:41:51.572012 1782105 main.go:141] libmachine: (calico-956477)   
	I0127 12:41:51.572016 1782105 main.go:141] libmachine: (calico-956477)   </cpu>
	I0127 12:41:51.572055 1782105 main.go:141] libmachine: (calico-956477)   <os>
	I0127 12:41:51.572079 1782105 main.go:141] libmachine: (calico-956477)     <type>hvm</type>
	I0127 12:41:51.572090 1782105 main.go:141] libmachine: (calico-956477)     <boot dev='cdrom'/>
	I0127 12:41:51.572098 1782105 main.go:141] libmachine: (calico-956477)     <boot dev='hd'/>
	I0127 12:41:51.572108 1782105 main.go:141] libmachine: (calico-956477)     <bootmenu enable='no'/>
	I0127 12:41:51.572116 1782105 main.go:141] libmachine: (calico-956477)   </os>
	I0127 12:41:51.572122 1782105 main.go:141] libmachine: (calico-956477)   <devices>
	I0127 12:41:51.572133 1782105 main.go:141] libmachine: (calico-956477)     <disk type='file' device='cdrom'>
	I0127 12:41:51.572157 1782105 main.go:141] libmachine: (calico-956477)       <source file='/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/calico-956477/boot2docker.iso'/>
	I0127 12:41:51.572171 1782105 main.go:141] libmachine: (calico-956477)       <target dev='hdc' bus='scsi'/>
	I0127 12:41:51.572183 1782105 main.go:141] libmachine: (calico-956477)       <readonly/>
	I0127 12:41:51.572190 1782105 main.go:141] libmachine: (calico-956477)     </disk>
	I0127 12:41:51.572207 1782105 main.go:141] libmachine: (calico-956477)     <disk type='file' device='disk'>
	I0127 12:41:51.572220 1782105 main.go:141] libmachine: (calico-956477)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 12:41:51.572236 1782105 main.go:141] libmachine: (calico-956477)       <source file='/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/calico-956477/calico-956477.rawdisk'/>
	I0127 12:41:51.572252 1782105 main.go:141] libmachine: (calico-956477)       <target dev='hda' bus='virtio'/>
	I0127 12:41:51.572263 1782105 main.go:141] libmachine: (calico-956477)     </disk>
	I0127 12:41:51.572274 1782105 main.go:141] libmachine: (calico-956477)     <interface type='network'>
	I0127 12:41:51.572286 1782105 main.go:141] libmachine: (calico-956477)       <source network='mk-calico-956477'/>
	I0127 12:41:51.572293 1782105 main.go:141] libmachine: (calico-956477)       <model type='virtio'/>
	I0127 12:41:51.572305 1782105 main.go:141] libmachine: (calico-956477)     </interface>
	I0127 12:41:51.572315 1782105 main.go:141] libmachine: (calico-956477)     <interface type='network'>
	I0127 12:41:51.572335 1782105 main.go:141] libmachine: (calico-956477)       <source network='default'/>
	I0127 12:41:51.572355 1782105 main.go:141] libmachine: (calico-956477)       <model type='virtio'/>
	I0127 12:41:51.572368 1782105 main.go:141] libmachine: (calico-956477)     </interface>
	I0127 12:41:51.572379 1782105 main.go:141] libmachine: (calico-956477)     <serial type='pty'>
	I0127 12:41:51.572388 1782105 main.go:141] libmachine: (calico-956477)       <target port='0'/>
	I0127 12:41:51.572402 1782105 main.go:141] libmachine: (calico-956477)     </serial>
	I0127 12:41:51.572420 1782105 main.go:141] libmachine: (calico-956477)     <console type='pty'>
	I0127 12:41:51.572439 1782105 main.go:141] libmachine: (calico-956477)       <target type='serial' port='0'/>
	I0127 12:41:51.572450 1782105 main.go:141] libmachine: (calico-956477)     </console>
	I0127 12:41:51.572461 1782105 main.go:141] libmachine: (calico-956477)     <rng model='virtio'>
	I0127 12:41:51.572471 1782105 main.go:141] libmachine: (calico-956477)       <backend model='random'>/dev/random</backend>
	I0127 12:41:51.572481 1782105 main.go:141] libmachine: (calico-956477)     </rng>
	I0127 12:41:51.572488 1782105 main.go:141] libmachine: (calico-956477)     
	I0127 12:41:51.572496 1782105 main.go:141] libmachine: (calico-956477)     
	I0127 12:41:51.572536 1782105 main.go:141] libmachine: (calico-956477)   </devices>
	I0127 12:41:51.572572 1782105 main.go:141] libmachine: (calico-956477) </domain>
	I0127 12:41:51.572591 1782105 main.go:141] libmachine: (calico-956477) 
	I0127 12:41:51.576686 1782105 main.go:141] libmachine: (calico-956477) DBG | domain calico-956477 has defined MAC address 52:54:00:8e:b7:10 in network default
	I0127 12:41:51.577277 1782105 main.go:141] libmachine: (calico-956477) starting domain...
	I0127 12:41:51.577298 1782105 main.go:141] libmachine: (calico-956477) DBG | domain calico-956477 has defined MAC address 52:54:00:f9:4f:37 in network mk-calico-956477
	I0127 12:41:51.577303 1782105 main.go:141] libmachine: (calico-956477) ensuring networks are active...
	I0127 12:41:51.577877 1782105 main.go:141] libmachine: (calico-956477) Ensuring network default is active
	I0127 12:41:51.578191 1782105 main.go:141] libmachine: (calico-956477) Ensuring network mk-calico-956477 is active
	I0127 12:41:51.578658 1782105 main.go:141] libmachine: (calico-956477) getting domain XML...
	I0127 12:41:51.579365 1782105 main.go:141] libmachine: (calico-956477) creating domain...
	I0127 12:41:52.820849 1782105 main.go:141] libmachine: (calico-956477) waiting for IP...
	I0127 12:41:52.821706 1782105 main.go:141] libmachine: (calico-956477) DBG | domain calico-956477 has defined MAC address 52:54:00:f9:4f:37 in network mk-calico-956477
	I0127 12:41:52.822139 1782105 main.go:141] libmachine: (calico-956477) DBG | unable to find current IP address of domain calico-956477 in network mk-calico-956477
	I0127 12:41:52.822211 1782105 main.go:141] libmachine: (calico-956477) DBG | I0127 12:41:52.822162 1782128 retry.go:31] will retry after 264.932186ms: waiting for domain to come up
	I0127 12:41:53.088651 1782105 main.go:141] libmachine: (calico-956477) DBG | domain calico-956477 has defined MAC address 52:54:00:f9:4f:37 in network mk-calico-956477
	I0127 12:41:53.089166 1782105 main.go:141] libmachine: (calico-956477) DBG | unable to find current IP address of domain calico-956477 in network mk-calico-956477
	I0127 12:41:53.089228 1782105 main.go:141] libmachine: (calico-956477) DBG | I0127 12:41:53.089124 1782128 retry.go:31] will retry after 355.227016ms: waiting for domain to come up
	I0127 12:41:53.445558 1782105 main.go:141] libmachine: (calico-956477) DBG | domain calico-956477 has defined MAC address 52:54:00:f9:4f:37 in network mk-calico-956477
	I0127 12:41:53.446083 1782105 main.go:141] libmachine: (calico-956477) DBG | unable to find current IP address of domain calico-956477 in network mk-calico-956477
	I0127 12:41:53.446105 1782105 main.go:141] libmachine: (calico-956477) DBG | I0127 12:41:53.446039 1782128 retry.go:31] will retry after 304.315394ms: waiting for domain to come up
	I0127 12:41:53.751658 1782105 main.go:141] libmachine: (calico-956477) DBG | domain calico-956477 has defined MAC address 52:54:00:f9:4f:37 in network mk-calico-956477
	I0127 12:41:53.752225 1782105 main.go:141] libmachine: (calico-956477) DBG | unable to find current IP address of domain calico-956477 in network mk-calico-956477
	I0127 12:41:53.752284 1782105 main.go:141] libmachine: (calico-956477) DBG | I0127 12:41:53.752210 1782128 retry.go:31] will retry after 483.339489ms: waiting for domain to come up
	I0127 12:41:54.236868 1782105 main.go:141] libmachine: (calico-956477) DBG | domain calico-956477 has defined MAC address 52:54:00:f9:4f:37 in network mk-calico-956477
	I0127 12:41:54.237406 1782105 main.go:141] libmachine: (calico-956477) DBG | unable to find current IP address of domain calico-956477 in network mk-calico-956477
	I0127 12:41:54.237432 1782105 main.go:141] libmachine: (calico-956477) DBG | I0127 12:41:54.237380 1782128 retry.go:31] will retry after 712.422175ms: waiting for domain to come up
	I0127 12:41:54.951209 1782105 main.go:141] libmachine: (calico-956477) DBG | domain calico-956477 has defined MAC address 52:54:00:f9:4f:37 in network mk-calico-956477
	I0127 12:41:54.951553 1782105 main.go:141] libmachine: (calico-956477) DBG | unable to find current IP address of domain calico-956477 in network mk-calico-956477
	I0127 12:41:54.951581 1782105 main.go:141] libmachine: (calico-956477) DBG | I0127 12:41:54.951523 1782128 retry.go:31] will retry after 656.082772ms: waiting for domain to come up
	I0127 12:41:55.608712 1782105 main.go:141] libmachine: (calico-956477) DBG | domain calico-956477 has defined MAC address 52:54:00:f9:4f:37 in network mk-calico-956477
	I0127 12:41:55.609227 1782105 main.go:141] libmachine: (calico-956477) DBG | unable to find current IP address of domain calico-956477 in network mk-calico-956477
	I0127 12:41:55.609303 1782105 main.go:141] libmachine: (calico-956477) DBG | I0127 12:41:55.609231 1782128 retry.go:31] will retry after 1.186730067s: waiting for domain to come up
	I0127 12:42:00.380790 1775552 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 12:42:00.380903 1775552 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 12:42:00.382723 1775552 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 12:42:00.382825 1775552 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 12:42:00.382928 1775552 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 12:42:00.383106 1775552 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 12:42:00.383236 1775552 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 12:42:00.383341 1775552 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 12:42:00.384612 1775552 out.go:235]   - Generating certificates and keys ...
	I0127 12:42:00.384705 1775552 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 12:42:00.384794 1775552 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 12:42:00.384897 1775552 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 12:42:00.384993 1775552 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 12:42:00.385096 1775552 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 12:42:00.385180 1775552 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 12:42:00.385241 1775552 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 12:42:00.385327 1775552 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 12:42:00.385449 1775552 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 12:42:00.385554 1775552 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 12:42:00.385611 1775552 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 12:42:00.385698 1775552 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 12:42:00.385749 1775552 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 12:42:00.385795 1775552 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 12:42:00.385863 1775552 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 12:42:00.385911 1775552 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 12:42:00.386003 1775552 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 12:42:00.386092 1775552 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 12:42:00.386165 1775552 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 12:42:00.386259 1775552 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 12:42:00.387546 1775552 out.go:235]   - Booting up control plane ...
	I0127 12:42:00.387659 1775552 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 12:42:00.387750 1775552 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 12:42:00.387858 1775552 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 12:42:00.388007 1775552 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 12:42:00.388246 1775552 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 12:42:00.388314 1775552 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 12:42:00.388419 1775552 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 12:42:00.388584 1775552 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 12:42:00.388643 1775552 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 12:42:00.388794 1775552 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 12:42:00.388855 1775552 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 12:42:00.389006 1775552 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 12:42:00.389064 1775552 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 12:42:00.389217 1775552 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 12:42:00.389276 1775552 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 12:42:00.389478 1775552 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 12:42:00.389487 1775552 kubeadm.go:310] 
	I0127 12:42:00.389522 1775552 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 12:42:00.389557 1775552 kubeadm.go:310] 		timed out waiting for the condition
	I0127 12:42:00.389563 1775552 kubeadm.go:310] 
	I0127 12:42:00.389616 1775552 kubeadm.go:310] 	This error is likely caused by:
	I0127 12:42:00.389659 1775552 kubeadm.go:310] 		- The kubelet is not running
	I0127 12:42:00.389796 1775552 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 12:42:00.389805 1775552 kubeadm.go:310] 
	I0127 12:42:00.389893 1775552 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 12:42:00.389923 1775552 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 12:42:00.389951 1775552 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 12:42:00.389957 1775552 kubeadm.go:310] 
	I0127 12:42:00.390040 1775552 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 12:42:00.390111 1775552 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 12:42:00.390117 1775552 kubeadm.go:310] 
	I0127 12:42:00.390238 1775552 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 12:42:00.390344 1775552 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 12:42:00.390433 1775552 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 12:42:00.390521 1775552 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 12:42:00.390600 1775552 kubeadm.go:394] duration metric: took 8m2.1292845s to StartCluster
	I0127 12:42:00.390672 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:42:00.390740 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:42:00.390808 1775552 kubeadm.go:310] 
	I0127 12:42:00.434697 1775552 cri.go:89] found id: ""
	I0127 12:42:00.434734 1775552 logs.go:282] 0 containers: []
	W0127 12:42:00.434758 1775552 logs.go:284] No container was found matching "kube-apiserver"
	I0127 12:42:00.434768 1775552 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 12:42:00.434839 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:42:00.468250 1775552 cri.go:89] found id: ""
	I0127 12:42:00.468283 1775552 logs.go:282] 0 containers: []
	W0127 12:42:00.468296 1775552 logs.go:284] No container was found matching "etcd"
	I0127 12:42:00.468304 1775552 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 12:42:00.468379 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:42:00.501136 1775552 cri.go:89] found id: ""
	I0127 12:42:00.501171 1775552 logs.go:282] 0 containers: []
	W0127 12:42:00.501183 1775552 logs.go:284] No container was found matching "coredns"
	I0127 12:42:00.501191 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:42:00.501267 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:42:00.534246 1775552 cri.go:89] found id: ""
	I0127 12:42:00.534293 1775552 logs.go:282] 0 containers: []
	W0127 12:42:00.534305 1775552 logs.go:284] No container was found matching "kube-scheduler"
	I0127 12:42:00.534313 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:42:00.534374 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:42:00.569901 1775552 cri.go:89] found id: ""
	I0127 12:42:00.569938 1775552 logs.go:282] 0 containers: []
	W0127 12:42:00.569951 1775552 logs.go:284] No container was found matching "kube-proxy"
	I0127 12:42:00.569959 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:42:00.570023 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:41:56.797936 1782105 main.go:141] libmachine: (calico-956477) DBG | domain calico-956477 has defined MAC address 52:54:00:f9:4f:37 in network mk-calico-956477
	I0127 12:41:56.798392 1782105 main.go:141] libmachine: (calico-956477) DBG | unable to find current IP address of domain calico-956477 in network mk-calico-956477
	I0127 12:41:56.798427 1782105 main.go:141] libmachine: (calico-956477) DBG | I0127 12:41:56.798354 1782128 retry.go:31] will retry after 1.017055693s: waiting for domain to come up
	I0127 12:41:57.817175 1782105 main.go:141] libmachine: (calico-956477) DBG | domain calico-956477 has defined MAC address 52:54:00:f9:4f:37 in network mk-calico-956477
	I0127 12:41:57.817782 1782105 main.go:141] libmachine: (calico-956477) DBG | unable to find current IP address of domain calico-956477 in network mk-calico-956477
	I0127 12:41:57.817814 1782105 main.go:141] libmachine: (calico-956477) DBG | I0127 12:41:57.817743 1782128 retry.go:31] will retry after 1.248228359s: waiting for domain to come up
	I0127 12:41:59.067806 1782105 main.go:141] libmachine: (calico-956477) DBG | domain calico-956477 has defined MAC address 52:54:00:f9:4f:37 in network mk-calico-956477
	I0127 12:41:59.068336 1782105 main.go:141] libmachine: (calico-956477) DBG | unable to find current IP address of domain calico-956477 in network mk-calico-956477
	I0127 12:41:59.068364 1782105 main.go:141] libmachine: (calico-956477) DBG | I0127 12:41:59.068316 1782128 retry.go:31] will retry after 1.405426417s: waiting for domain to come up
	I0127 12:42:00.475848 1782105 main.go:141] libmachine: (calico-956477) DBG | domain calico-956477 has defined MAC address 52:54:00:f9:4f:37 in network mk-calico-956477
	I0127 12:42:00.476491 1782105 main.go:141] libmachine: (calico-956477) DBG | unable to find current IP address of domain calico-956477 in network mk-calico-956477
	I0127 12:42:00.476524 1782105 main.go:141] libmachine: (calico-956477) DBG | I0127 12:42:00.476464 1782128 retry.go:31] will retry after 2.855737734s: waiting for domain to come up
	I0127 12:42:00.607468 1775552 cri.go:89] found id: ""
	I0127 12:42:00.607499 1775552 logs.go:282] 0 containers: []
	W0127 12:42:00.607511 1775552 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 12:42:00.607519 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 12:42:00.607584 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:42:00.640103 1775552 cri.go:89] found id: ""
	I0127 12:42:00.640143 1775552 logs.go:282] 0 containers: []
	W0127 12:42:00.640156 1775552 logs.go:284] No container was found matching "kindnet"
	I0127 12:42:00.640165 1775552 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:42:00.640241 1775552 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:42:00.673571 1775552 cri.go:89] found id: ""
	I0127 12:42:00.673610 1775552 logs.go:282] 0 containers: []
	W0127 12:42:00.673624 1775552 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 12:42:00.673640 1775552 logs.go:123] Gathering logs for dmesg ...
	I0127 12:42:00.673661 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:42:00.689315 1775552 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:42:00.689362 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 12:42:00.766937 1775552 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 12:42:00.766970 1775552 logs.go:123] Gathering logs for CRI-O ...
	I0127 12:42:00.767002 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 12:42:00.900474 1775552 logs.go:123] Gathering logs for container status ...
	I0127 12:42:00.900514 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:42:00.939400 1775552 logs.go:123] Gathering logs for kubelet ...
	I0127 12:42:00.939441 1775552 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0127 12:42:01.005217 1775552 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0127 12:42:01.005274 1775552 out.go:270] * 
	W0127 12:42:01.005350 1775552 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 12:42:01.005369 1775552 out.go:270] * 
	W0127 12:42:01.006281 1775552 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 12:42:01.009438 1775552 out.go:201] 
	W0127 12:42:01.010472 1775552 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 12:42:01.010517 1775552 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0127 12:42:01.010560 1775552 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0127 12:42:01.011726 1775552 out.go:201] 
	
	
	==> CRI-O <==
	Jan 27 12:42:02 old-k8s-version-488586 crio[629]: time="2025-01-27 12:42:02.035888795Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737981722035854643,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f04c1f24-0e18-4801-ab48-60cbdccbf733 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:42:02 old-k8s-version-488586 crio[629]: time="2025-01-27 12:42:02.036403489Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d4ea7a1d-7e07-4dc2-9823-b31fadc195f8 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:42:02 old-k8s-version-488586 crio[629]: time="2025-01-27 12:42:02.036464960Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d4ea7a1d-7e07-4dc2-9823-b31fadc195f8 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:42:02 old-k8s-version-488586 crio[629]: time="2025-01-27 12:42:02.036496540Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d4ea7a1d-7e07-4dc2-9823-b31fadc195f8 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:42:02 old-k8s-version-488586 crio[629]: time="2025-01-27 12:42:02.071589359Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2d97ad63-cb82-465f-b7e9-c315536c1cf3 name=/runtime.v1.RuntimeService/Version
	Jan 27 12:42:02 old-k8s-version-488586 crio[629]: time="2025-01-27 12:42:02.071695467Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2d97ad63-cb82-465f-b7e9-c315536c1cf3 name=/runtime.v1.RuntimeService/Version
	Jan 27 12:42:02 old-k8s-version-488586 crio[629]: time="2025-01-27 12:42:02.073199306Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bdc89b9c-3625-49aa-8411-db15289d7279 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:42:02 old-k8s-version-488586 crio[629]: time="2025-01-27 12:42:02.073718046Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737981722073694299,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bdc89b9c-3625-49aa-8411-db15289d7279 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:42:02 old-k8s-version-488586 crio[629]: time="2025-01-27 12:42:02.074517116Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1c662cfb-508c-4ba1-9059-cb04269214cb name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:42:02 old-k8s-version-488586 crio[629]: time="2025-01-27 12:42:02.074606178Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1c662cfb-508c-4ba1-9059-cb04269214cb name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:42:02 old-k8s-version-488586 crio[629]: time="2025-01-27 12:42:02.074658123Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1c662cfb-508c-4ba1-9059-cb04269214cb name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:42:02 old-k8s-version-488586 crio[629]: time="2025-01-27 12:42:02.110821921Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bb3da6cd-a74e-4890-9acb-268f76a21a89 name=/runtime.v1.RuntimeService/Version
	Jan 27 12:42:02 old-k8s-version-488586 crio[629]: time="2025-01-27 12:42:02.110960549Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bb3da6cd-a74e-4890-9acb-268f76a21a89 name=/runtime.v1.RuntimeService/Version
	Jan 27 12:42:02 old-k8s-version-488586 crio[629]: time="2025-01-27 12:42:02.112249540Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1471b078-d6f1-4b2a-b791-63a2c7cf5866 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:42:02 old-k8s-version-488586 crio[629]: time="2025-01-27 12:42:02.112706065Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737981722112685738,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1471b078-d6f1-4b2a-b791-63a2c7cf5866 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:42:02 old-k8s-version-488586 crio[629]: time="2025-01-27 12:42:02.113385834Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b3601849-bb2e-4edb-9a09-ebc9e2ee8375 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:42:02 old-k8s-version-488586 crio[629]: time="2025-01-27 12:42:02.113483470Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b3601849-bb2e-4edb-9a09-ebc9e2ee8375 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:42:02 old-k8s-version-488586 crio[629]: time="2025-01-27 12:42:02.113537778Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b3601849-bb2e-4edb-9a09-ebc9e2ee8375 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:42:02 old-k8s-version-488586 crio[629]: time="2025-01-27 12:42:02.149457208Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=599226cb-368d-4302-9ad5-9602bfebfe6f name=/runtime.v1.RuntimeService/Version
	Jan 27 12:42:02 old-k8s-version-488586 crio[629]: time="2025-01-27 12:42:02.149573062Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=599226cb-368d-4302-9ad5-9602bfebfe6f name=/runtime.v1.RuntimeService/Version
	Jan 27 12:42:02 old-k8s-version-488586 crio[629]: time="2025-01-27 12:42:02.151016216Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a959755d-5e84-41a4-8778-12f82d07624b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:42:02 old-k8s-version-488586 crio[629]: time="2025-01-27 12:42:02.151470466Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737981722151450213,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a959755d-5e84-41a4-8778-12f82d07624b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:42:02 old-k8s-version-488586 crio[629]: time="2025-01-27 12:42:02.152085980Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4fd2aafd-e58b-4a5f-ab43-32caea70cc4c name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:42:02 old-k8s-version-488586 crio[629]: time="2025-01-27 12:42:02.152155783Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4fd2aafd-e58b-4a5f-ab43-32caea70cc4c name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:42:02 old-k8s-version-488586 crio[629]: time="2025-01-27 12:42:02.152204761Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=4fd2aafd-e58b-4a5f-ab43-32caea70cc4c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jan27 12:33] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053366] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041222] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.970481] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.025771] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.448056] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.962494] systemd-fstab-generator[554]: Ignoring "noauto" option for root device
	[  +0.061579] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.076809] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.161479] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.142173] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.226136] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +6.196155] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.067931] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.820821] systemd-fstab-generator[1007]: Ignoring "noauto" option for root device
	[Jan27 12:34] kauditd_printk_skb: 46 callbacks suppressed
	[Jan27 12:38] systemd-fstab-generator[5113]: Ignoring "noauto" option for root device
	[Jan27 12:40] systemd-fstab-generator[5392]: Ignoring "noauto" option for root device
	[  +0.064559] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:42:02 up 8 min,  0 users,  load average: 0.16, 0.14, 0.09
	Linux old-k8s-version-488586 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jan 27 12:42:00 old-k8s-version-488586 kubelet[5573]:         /usr/local/go/src/net/cgo_unix.go:228 +0xc7
	Jan 27 12:42:00 old-k8s-version-488586 kubelet[5573]: goroutine 155 [runnable]:
	Jan 27 12:42:00 old-k8s-version-488586 kubelet[5573]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc0008bfa40)
	Jan 27 12:42:00 old-k8s-version-488586 kubelet[5573]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1242
	Jan 27 12:42:00 old-k8s-version-488586 kubelet[5573]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jan 27 12:42:00 old-k8s-version-488586 kubelet[5573]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Jan 27 12:42:00 old-k8s-version-488586 kubelet[5573]: goroutine 156 [select]:
	Jan 27 12:42:00 old-k8s-version-488586 kubelet[5573]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc0006d1310, 0xc000348301, 0xc000b34c00, 0xc000b23810, 0xc000363580, 0xc000363540)
	Jan 27 12:42:00 old-k8s-version-488586 kubelet[5573]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Jan 27 12:42:00 old-k8s-version-488586 kubelet[5573]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000348360, 0x0, 0x0)
	Jan 27 12:42:00 old-k8s-version-488586 kubelet[5573]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Jan 27 12:42:00 old-k8s-version-488586 kubelet[5573]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0008bfa40)
	Jan 27 12:42:00 old-k8s-version-488586 kubelet[5573]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Jan 27 12:42:00 old-k8s-version-488586 kubelet[5573]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jan 27 12:42:00 old-k8s-version-488586 kubelet[5573]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Jan 27 12:42:00 old-k8s-version-488586 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 27 12:42:00 old-k8s-version-488586 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 27 12:42:00 old-k8s-version-488586 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Jan 27 12:42:00 old-k8s-version-488586 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 27 12:42:00 old-k8s-version-488586 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 27 12:42:00 old-k8s-version-488586 kubelet[5626]: I0127 12:42:00.874885    5626 server.go:416] Version: v1.20.0
	Jan 27 12:42:00 old-k8s-version-488586 kubelet[5626]: I0127 12:42:00.875225    5626 server.go:837] Client rotation is on, will bootstrap in background
	Jan 27 12:42:00 old-k8s-version-488586 kubelet[5626]: I0127 12:42:00.877141    5626 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 27 12:42:00 old-k8s-version-488586 kubelet[5626]: W0127 12:42:00.878221    5626 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jan 27 12:42:00 old-k8s-version-488586 kubelet[5626]: I0127 12:42:00.878701    5626 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-488586 -n old-k8s-version-488586
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-488586 -n old-k8s-version-488586: exit status 2 (242.898268ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-488586" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (512.09s)
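Editor's note: the kubeadm output captured above indicates the kubelet on this node never became healthy. A minimal diagnostic sketch, assuming SSH access to the affected minikube VM via the same binary and profile name that appear in the logs above (out/minikube-linux-amd64, old-k8s-version-488586); it only replays the checks the kubeadm output itself suggests and adds nothing beyond them:

	# Check whether the kubelet service is running and why it last exited
	out/minikube-linux-amd64 -p old-k8s-version-488586 ssh "sudo systemctl status kubelet --no-pager"
	out/minikube-linux-amd64 -p old-k8s-version-488586 ssh "sudo journalctl -xeu kubelet -n 100 --no-pager"
	# Probe the kubelet healthz endpoint that kubeadm polls during wait-control-plane
	out/minikube-linux-amd64 -p old-k8s-version-488586 ssh "curl -sSL http://localhost:10248/healthz"
	# List control-plane containers known to CRI-O, as suggested in the kubeadm output
	out/minikube-linux-amd64 -p old-k8s-version-488586 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"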

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.41s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
[previous warning repeated 75 more times]
I0127 12:43:17.775866 1731396 config.go:182] Loaded profile config "calico-956477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
[previous warning repeated 97 more times]
E0127 12:44:56.261293 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/auto-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:44:57.542996 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/auto-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
[previous message repeated 2 more times]
E0127 12:45:00.105136 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/auto-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
[previous message repeated 6 more times]
E0127 12:45:07.002401 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/functional-977534/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
[previous message repeated 7 more times]
E0127 12:45:15.468768 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/auto-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
[previous message repeated 20 more times]
E0127 12:45:35.950088 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/auto-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
[previous message repeated 40 more times]
E0127 12:46:16.911595 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/auto-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:46:19.122461 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kindnet-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:46:19.128890 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kindnet-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:46:19.140247 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kindnet-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:46:19.161584 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kindnet-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:46:19.202952 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kindnet-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:46:19.284381 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kindnet-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:46:19.445967 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kindnet-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:46:19.768257 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kindnet-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:46:20.410422 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kindnet-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:46:21.692584 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kindnet-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:46:24.254806 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kindnet-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
[previous message repeated 4 more times]
E0127 12:46:29.376195 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kindnet-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:46:30.075824 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/functional-977534/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
[previous message repeated 5 more times]
E0127 12:46:36.327180 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
I0127 12:46:37.806899 1731396 config.go:182] Loaded profile config "enable-default-cni-956477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:46:39.617618 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kindnet-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
[previous message repeated 20 more times]
E0127 12:47:00.098986 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kindnet-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
[previous message repeated 20 more times]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:47:38.833715 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/auto-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:47:41.060760 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kindnet-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:48:11.565302 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/calico-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:48:11.571716 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/calico-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:48:11.583180 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/calico-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:48:11.604590 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/calico-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:48:11.645860 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/calico-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:48:11.727366 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/calico-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:48:11.889671 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/calico-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:48:12.211354 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/calico-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:48:14.134940 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/calico-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:48:16.696582 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/calico-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:48:21.817927 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/calico-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:48:32.059845 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/calico-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:48:52.541154 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/calico-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:49:02.982336 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kindnet-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:49:33.502969 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/calico-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:49:54.972059 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/auto-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:49:55.202426 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/custom-flannel-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:49:55.208859 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/custom-flannel-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:49:55.220220 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/custom-flannel-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:49:55.241546 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/custom-flannel-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:49:55.282991 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/custom-flannel-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:49:55.364579 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/custom-flannel-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:49:55.526096 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/custom-flannel-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:49:55.847630 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/custom-flannel-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:49:56.489589 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/custom-flannel-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:49:57.771345 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/custom-flannel-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:50:00.333397 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/custom-flannel-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:50:05.454882 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/custom-flannel-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:50:07.002034 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/functional-977534/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:50:15.696514 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/custom-flannel-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:50:22.675203 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/auto-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:50:36.178033 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/custom-flannel-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:50:55.424842 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/calico-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-488586 -n old-k8s-version-488586
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-488586 -n old-k8s-version-488586: exit status 2 (236.723673ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "old-k8s-version-488586" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
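For reference, a minimal manual sketch of the same check the test performs, assuming the cluster is still reachable and that minikube created a kubectl context named old-k8s-version-488586 for this profile (illustrative commands only, not part of the test harness):

	# Report host/kubelet/apiserver state for the profile (the post-mortem below runs a similar status check).
	out/minikube-linux-amd64 status -p old-k8s-version-488586 -n old-k8s-version-488586
	# List the dashboard pods the test polls for, using the same label selector seen in the warnings above.
	kubectl --context old-k8s-version-488586 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# Optionally wait for readiness with an explicit timeout, mirroring the test's 9m0s window.
	kubectl --context old-k8s-version-488586 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m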
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-488586 -n old-k8s-version-488586
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-488586 -n old-k8s-version-488586: exit status 2 (226.731362ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-488586 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-956477 sudo iptables                       | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:49 UTC | 27 Jan 25 12:49 UTC |
	|         | -t nat -L -n -v                                      |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:49 UTC | 27 Jan 25 12:49 UTC |
	|         | systemctl status kubelet --all                       |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:49 UTC | 27 Jan 25 12:49 UTC |
	|         | systemctl cat kubelet                                |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:49 UTC | 27 Jan 25 12:49 UTC |
	|         | journalctl -xeu kubelet --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo cat                            | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:49 UTC | 27 Jan 25 12:49 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo cat                            | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:49 UTC | 27 Jan 25 12:49 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:49 UTC |                     |
	|         | systemctl status docker --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:49 UTC | 27 Jan 25 12:49 UTC |
	|         | systemctl cat docker                                 |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo cat                            | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:49 UTC | 27 Jan 25 12:49 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo docker                         | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC |                     |
	|         | systemctl status cri-docker                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | systemctl cat cri-docker                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo cat                            | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo cat                            | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC |                     |
	|         | systemctl status containerd                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | systemctl cat containerd                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo cat                            | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo cat                            | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | containerd config dump                               |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | systemctl status crio --all                          |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | systemctl cat crio --no-pager                        |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo find                           | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo crio                           | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p bridge-956477                                     | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 12:48:45
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 12:48:45.061131 1790192 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:48:45.061460 1790192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:48:45.061507 1790192 out.go:358] Setting ErrFile to fd 2...
	I0127 12:48:45.061571 1790192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:48:45.061947 1790192 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-1724227/.minikube/bin
	I0127 12:48:45.062550 1790192 out.go:352] Setting JSON to false
	I0127 12:48:45.063760 1790192 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":34266,"bootTime":1737947859,"procs":313,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 12:48:45.063872 1790192 start.go:139] virtualization: kvm guest
	I0127 12:48:45.065969 1790192 out.go:177] * [bridge-956477] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 12:48:45.067136 1790192 out.go:177]   - MINIKUBE_LOCATION=20318
	I0127 12:48:45.067134 1790192 notify.go:220] Checking for updates...
	I0127 12:48:45.068296 1790192 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:48:45.069519 1790192 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20318-1724227/kubeconfig
	I0127 12:48:45.070522 1790192 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-1724227/.minikube
	I0127 12:48:45.071653 1790192 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 12:48:45.072745 1790192 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:48:45.074387 1790192 config.go:182] Loaded profile config "default-k8s-diff-port-485564": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:48:45.074542 1790192 config.go:182] Loaded profile config "no-preload-472479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:48:45.074661 1790192 config.go:182] Loaded profile config "old-k8s-version-488586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 12:48:45.074797 1790192 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:48:45.111354 1790192 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 12:48:45.112385 1790192 start.go:297] selected driver: kvm2
	I0127 12:48:45.112404 1790192 start.go:901] validating driver "kvm2" against <nil>
	I0127 12:48:45.112417 1790192 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:48:45.113111 1790192 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:48:45.113192 1790192 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20318-1724227/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 12:48:45.129191 1790192 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 12:48:45.129247 1790192 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 12:48:45.129509 1790192 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 12:48:45.129542 1790192 cni.go:84] Creating CNI manager for "bridge"
	I0127 12:48:45.129550 1790192 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 12:48:45.129616 1790192 start.go:340] cluster config:
	{Name:bridge-956477 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-956477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:48:45.129762 1790192 iso.go:125] acquiring lock: {Name:mkadfcca31fda677c8c62cad1d325dd2bd0a2473 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:48:45.131229 1790192 out.go:177] * Starting "bridge-956477" primary control-plane node in "bridge-956477" cluster
	I0127 12:48:45.132207 1790192 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 12:48:45.132243 1790192 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 12:48:45.132258 1790192 cache.go:56] Caching tarball of preloaded images
	I0127 12:48:45.132337 1790192 preload.go:172] Found /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 12:48:45.132351 1790192 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 12:48:45.132455 1790192 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/config.json ...
	I0127 12:48:45.132478 1790192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/config.json: {Name:mka55a4b4af7aaf9911ae593f9f5e3f84a3441e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:48:45.133024 1790192 start.go:360] acquireMachinesLock for bridge-956477: {Name:mk206ddd5e564ea6986d65bef76be5837a8b5360 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 12:48:45.133083 1790192 start.go:364] duration metric: took 34.753µs to acquireMachinesLock for "bridge-956477"
	I0127 12:48:45.133110 1790192 start.go:93] Provisioning new machine with config: &{Name:bridge-956477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-956477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 12:48:45.133187 1790192 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 12:48:45.134561 1790192 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0127 12:48:45.134690 1790192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:48:45.134731 1790192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:48:45.149509 1790192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35177
	I0127 12:48:45.150027 1790192 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:48:45.150619 1790192 main.go:141] libmachine: Using API Version  1
	I0127 12:48:45.150641 1790192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:48:45.150972 1790192 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:48:45.151149 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetMachineName
	I0127 12:48:45.151259 1790192 main.go:141] libmachine: (bridge-956477) Calling .DriverName
	I0127 12:48:45.151400 1790192 start.go:159] libmachine.API.Create for "bridge-956477" (driver="kvm2")
	I0127 12:48:45.151431 1790192 client.go:168] LocalClient.Create starting
	I0127 12:48:45.151462 1790192 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem
	I0127 12:48:45.151502 1790192 main.go:141] libmachine: Decoding PEM data...
	I0127 12:48:45.151518 1790192 main.go:141] libmachine: Parsing certificate...
	I0127 12:48:45.151583 1790192 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem
	I0127 12:48:45.151607 1790192 main.go:141] libmachine: Decoding PEM data...
	I0127 12:48:45.151621 1790192 main.go:141] libmachine: Parsing certificate...
	I0127 12:48:45.151653 1790192 main.go:141] libmachine: Running pre-create checks...
	I0127 12:48:45.151666 1790192 main.go:141] libmachine: (bridge-956477) Calling .PreCreateCheck
	I0127 12:48:45.152022 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetConfigRaw
	I0127 12:48:45.152404 1790192 main.go:141] libmachine: Creating machine...
	I0127 12:48:45.152417 1790192 main.go:141] libmachine: (bridge-956477) Calling .Create
	I0127 12:48:45.152533 1790192 main.go:141] libmachine: (bridge-956477) creating KVM machine...
	I0127 12:48:45.152554 1790192 main.go:141] libmachine: (bridge-956477) creating network...
	I0127 12:48:45.153709 1790192 main.go:141] libmachine: (bridge-956477) DBG | found existing default KVM network
	I0127 12:48:45.154981 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:45.154812 1790215 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:13:89:36} reservation:<nil>}
	I0127 12:48:45.156047 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:45.155949 1790215 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:f9:0f:53} reservation:<nil>}
	I0127 12:48:45.156973 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:45.156878 1790215 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:ac:57:68} reservation:<nil>}
	I0127 12:48:45.158158 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:45.158076 1790215 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00039efc0}
	I0127 12:48:45.158183 1790192 main.go:141] libmachine: (bridge-956477) DBG | created network xml: 
	I0127 12:48:45.158196 1790192 main.go:141] libmachine: (bridge-956477) DBG | <network>
	I0127 12:48:45.158206 1790192 main.go:141] libmachine: (bridge-956477) DBG |   <name>mk-bridge-956477</name>
	I0127 12:48:45.158211 1790192 main.go:141] libmachine: (bridge-956477) DBG |   <dns enable='no'/>
	I0127 12:48:45.158215 1790192 main.go:141] libmachine: (bridge-956477) DBG |   
	I0127 12:48:45.158222 1790192 main.go:141] libmachine: (bridge-956477) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0127 12:48:45.158232 1790192 main.go:141] libmachine: (bridge-956477) DBG |     <dhcp>
	I0127 12:48:45.158241 1790192 main.go:141] libmachine: (bridge-956477) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0127 12:48:45.158250 1790192 main.go:141] libmachine: (bridge-956477) DBG |     </dhcp>
	I0127 12:48:45.158258 1790192 main.go:141] libmachine: (bridge-956477) DBG |   </ip>
	I0127 12:48:45.158266 1790192 main.go:141] libmachine: (bridge-956477) DBG |   
	I0127 12:48:45.158275 1790192 main.go:141] libmachine: (bridge-956477) DBG | </network>
	I0127 12:48:45.158288 1790192 main.go:141] libmachine: (bridge-956477) DBG | 
	I0127 12:48:45.163152 1790192 main.go:141] libmachine: (bridge-956477) DBG | trying to create private KVM network mk-bridge-956477 192.168.72.0/24...
	I0127 12:48:45.234336 1790192 main.go:141] libmachine: (bridge-956477) DBG | private KVM network mk-bridge-956477 192.168.72.0/24 created
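	For reference, the <network> XML printed just above could be reproduced by hand with virsh along these lines (illustrative sketch only; libmachine drives the libvirt API directly, and mk-bridge-956477.xml is a hypothetical file holding that XML):
	    # Assumes the network XML from the log was saved to mk-bridge-956477.xml (hypothetical filename).
	    virsh net-define mk-bridge-956477.xml   # register the persistent network definition
	    virsh net-start mk-bridge-956477        # bring up the bridge and dnsmasq for 192.168.72.0/24
	    virsh net-autostart mk-bridge-956477    # optional: have libvirtd start it on boot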
	I0127 12:48:45.234373 1790192 main.go:141] libmachine: (bridge-956477) setting up store path in /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477 ...
	I0127 12:48:45.234401 1790192 main.go:141] libmachine: (bridge-956477) building disk image from file:///home/jenkins/minikube-integration/20318-1724227/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 12:48:45.234417 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:45.234378 1790215 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20318-1724227/.minikube
	I0127 12:48:45.234566 1790192 main.go:141] libmachine: (bridge-956477) Downloading /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20318-1724227/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 12:48:45.542800 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:45.542627 1790215 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/id_rsa...
	I0127 12:48:45.665840 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:45.665684 1790215 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/bridge-956477.rawdisk...
	I0127 12:48:45.665878 1790192 main.go:141] libmachine: (bridge-956477) DBG | Writing magic tar header
	I0127 12:48:45.665895 1790192 main.go:141] libmachine: (bridge-956477) DBG | Writing SSH key tar header
	I0127 12:48:45.665905 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:45.665802 1790215 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477 ...
	I0127 12:48:45.665915 1790192 main.go:141] libmachine: (bridge-956477) setting executable bit set on /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477 (perms=drwx------)
	I0127 12:48:45.665924 1790192 main.go:141] libmachine: (bridge-956477) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477
	I0127 12:48:45.665934 1790192 main.go:141] libmachine: (bridge-956477) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines
	I0127 12:48:45.665954 1790192 main.go:141] libmachine: (bridge-956477) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20318-1724227/.minikube
	I0127 12:48:45.665963 1790192 main.go:141] libmachine: (bridge-956477) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20318-1724227
	I0127 12:48:45.665979 1790192 main.go:141] libmachine: (bridge-956477) setting executable bit set on /home/jenkins/minikube-integration/20318-1724227/.minikube/machines (perms=drwxr-xr-x)
	I0127 12:48:45.665993 1790192 main.go:141] libmachine: (bridge-956477) setting executable bit set on /home/jenkins/minikube-integration/20318-1724227/.minikube (perms=drwxr-xr-x)
	I0127 12:48:45.666023 1790192 main.go:141] libmachine: (bridge-956477) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 12:48:45.666045 1790192 main.go:141] libmachine: (bridge-956477) DBG | checking permissions on dir: /home/jenkins
	I0127 12:48:45.666058 1790192 main.go:141] libmachine: (bridge-956477) setting executable bit set on /home/jenkins/minikube-integration/20318-1724227 (perms=drwxrwxr-x)
	I0127 12:48:45.666069 1790192 main.go:141] libmachine: (bridge-956477) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 12:48:45.666074 1790192 main.go:141] libmachine: (bridge-956477) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 12:48:45.666085 1790192 main.go:141] libmachine: (bridge-956477) creating domain...
	I0127 12:48:45.666092 1790192 main.go:141] libmachine: (bridge-956477) DBG | checking permissions on dir: /home
	I0127 12:48:45.666099 1790192 main.go:141] libmachine: (bridge-956477) DBG | skipping /home - not owner
	I0127 12:48:45.667183 1790192 main.go:141] libmachine: (bridge-956477) define libvirt domain using xml: 
	I0127 12:48:45.667207 1790192 main.go:141] libmachine: (bridge-956477) <domain type='kvm'>
	I0127 12:48:45.667217 1790192 main.go:141] libmachine: (bridge-956477)   <name>bridge-956477</name>
	I0127 12:48:45.667225 1790192 main.go:141] libmachine: (bridge-956477)   <memory unit='MiB'>3072</memory>
	I0127 12:48:45.667233 1790192 main.go:141] libmachine: (bridge-956477)   <vcpu>2</vcpu>
	I0127 12:48:45.667241 1790192 main.go:141] libmachine: (bridge-956477)   <features>
	I0127 12:48:45.667252 1790192 main.go:141] libmachine: (bridge-956477)     <acpi/>
	I0127 12:48:45.667256 1790192 main.go:141] libmachine: (bridge-956477)     <apic/>
	I0127 12:48:45.667262 1790192 main.go:141] libmachine: (bridge-956477)     <pae/>
	I0127 12:48:45.667266 1790192 main.go:141] libmachine: (bridge-956477)     
	I0127 12:48:45.667283 1790192 main.go:141] libmachine: (bridge-956477)   </features>
	I0127 12:48:45.667291 1790192 main.go:141] libmachine: (bridge-956477)   <cpu mode='host-passthrough'>
	I0127 12:48:45.667311 1790192 main.go:141] libmachine: (bridge-956477)   
	I0127 12:48:45.667327 1790192 main.go:141] libmachine: (bridge-956477)   </cpu>
	I0127 12:48:45.667351 1790192 main.go:141] libmachine: (bridge-956477)   <os>
	I0127 12:48:45.667372 1790192 main.go:141] libmachine: (bridge-956477)     <type>hvm</type>
	I0127 12:48:45.667389 1790192 main.go:141] libmachine: (bridge-956477)     <boot dev='cdrom'/>
	I0127 12:48:45.667405 1790192 main.go:141] libmachine: (bridge-956477)     <boot dev='hd'/>
	I0127 12:48:45.667416 1790192 main.go:141] libmachine: (bridge-956477)     <bootmenu enable='no'/>
	I0127 12:48:45.667423 1790192 main.go:141] libmachine: (bridge-956477)   </os>
	I0127 12:48:45.667433 1790192 main.go:141] libmachine: (bridge-956477)   <devices>
	I0127 12:48:45.667441 1790192 main.go:141] libmachine: (bridge-956477)     <disk type='file' device='cdrom'>
	I0127 12:48:45.667452 1790192 main.go:141] libmachine: (bridge-956477)       <source file='/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/boot2docker.iso'/>
	I0127 12:48:45.667459 1790192 main.go:141] libmachine: (bridge-956477)       <target dev='hdc' bus='scsi'/>
	I0127 12:48:45.667464 1790192 main.go:141] libmachine: (bridge-956477)       <readonly/>
	I0127 12:48:45.667470 1790192 main.go:141] libmachine: (bridge-956477)     </disk>
	I0127 12:48:45.667480 1790192 main.go:141] libmachine: (bridge-956477)     <disk type='file' device='disk'>
	I0127 12:48:45.667502 1790192 main.go:141] libmachine: (bridge-956477)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 12:48:45.667514 1790192 main.go:141] libmachine: (bridge-956477)       <source file='/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/bridge-956477.rawdisk'/>
	I0127 12:48:45.667519 1790192 main.go:141] libmachine: (bridge-956477)       <target dev='hda' bus='virtio'/>
	I0127 12:48:45.667527 1790192 main.go:141] libmachine: (bridge-956477)     </disk>
	I0127 12:48:45.667531 1790192 main.go:141] libmachine: (bridge-956477)     <interface type='network'>
	I0127 12:48:45.667537 1790192 main.go:141] libmachine: (bridge-956477)       <source network='mk-bridge-956477'/>
	I0127 12:48:45.667544 1790192 main.go:141] libmachine: (bridge-956477)       <model type='virtio'/>
	I0127 12:48:45.667549 1790192 main.go:141] libmachine: (bridge-956477)     </interface>
	I0127 12:48:45.667555 1790192 main.go:141] libmachine: (bridge-956477)     <interface type='network'>
	I0127 12:48:45.667582 1790192 main.go:141] libmachine: (bridge-956477)       <source network='default'/>
	I0127 12:48:45.667600 1790192 main.go:141] libmachine: (bridge-956477)       <model type='virtio'/>
	I0127 12:48:45.667613 1790192 main.go:141] libmachine: (bridge-956477)     </interface>
	I0127 12:48:45.667621 1790192 main.go:141] libmachine: (bridge-956477)     <serial type='pty'>
	I0127 12:48:45.667633 1790192 main.go:141] libmachine: (bridge-956477)       <target port='0'/>
	I0127 12:48:45.667640 1790192 main.go:141] libmachine: (bridge-956477)     </serial>
	I0127 12:48:45.667651 1790192 main.go:141] libmachine: (bridge-956477)     <console type='pty'>
	I0127 12:48:45.667662 1790192 main.go:141] libmachine: (bridge-956477)       <target type='serial' port='0'/>
	I0127 12:48:45.667673 1790192 main.go:141] libmachine: (bridge-956477)     </console>
	I0127 12:48:45.667691 1790192 main.go:141] libmachine: (bridge-956477)     <rng model='virtio'>
	I0127 12:48:45.667705 1790192 main.go:141] libmachine: (bridge-956477)       <backend model='random'>/dev/random</backend>
	I0127 12:48:45.667714 1790192 main.go:141] libmachine: (bridge-956477)     </rng>
	I0127 12:48:45.667722 1790192 main.go:141] libmachine: (bridge-956477)     
	I0127 12:48:45.667731 1790192 main.go:141] libmachine: (bridge-956477)     
	I0127 12:48:45.667740 1790192 main.go:141] libmachine: (bridge-956477)   </devices>
	I0127 12:48:45.667749 1790192 main.go:141] libmachine: (bridge-956477) </domain>
	I0127 12:48:45.667765 1790192 main.go:141] libmachine: (bridge-956477) 
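	The domain XML above is defined and booted through the libvirt API; a rough hand-driven equivalent (sketch only, not minikube's actual code path, with bridge-956477.xml a hypothetical file containing that XML) would be:
	    virsh define bridge-956477.xml   # persist the domain definition
	    virsh start bridge-956477        # boot it; first boot comes from the boot2docker ISO (boot dev='cdrom')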
	I0127 12:48:45.672524 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:ac:62:83 in network default
	I0127 12:48:45.673006 1790192 main.go:141] libmachine: (bridge-956477) starting domain...
	I0127 12:48:45.673024 1790192 main.go:141] libmachine: (bridge-956477) ensuring networks are active...
	I0127 12:48:45.673031 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:45.673650 1790192 main.go:141] libmachine: (bridge-956477) Ensuring network default is active
	I0127 12:48:45.673918 1790192 main.go:141] libmachine: (bridge-956477) Ensuring network mk-bridge-956477 is active
	I0127 12:48:45.674443 1790192 main.go:141] libmachine: (bridge-956477) getting domain XML...
	I0127 12:48:45.675241 1790192 main.go:141] libmachine: (bridge-956477) creating domain...
	I0127 12:48:46.910072 1790192 main.go:141] libmachine: (bridge-956477) waiting for IP...
	I0127 12:48:46.910991 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:46.911503 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:46.911587 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:46.911518 1790215 retry.go:31] will retry after 215.854927ms: waiting for domain to come up
	I0127 12:48:47.128865 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:47.129422 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:47.129454 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:47.129389 1790215 retry.go:31] will retry after 345.744835ms: waiting for domain to come up
	I0127 12:48:47.476809 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:47.477321 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:47.477351 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:47.477304 1790215 retry.go:31] will retry after 387.587044ms: waiting for domain to come up
	I0127 12:48:47.867011 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:47.867519 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:47.867563 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:47.867512 1790215 retry.go:31] will retry after 564.938674ms: waiting for domain to come up
	I0127 12:48:48.434398 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:48.434970 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:48.434999 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:48.434928 1790215 retry.go:31] will retry after 628.439712ms: waiting for domain to come up
	I0127 12:48:49.064853 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:49.065323 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:49.065358 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:49.065288 1790215 retry.go:31] will retry after 745.70592ms: waiting for domain to come up
	I0127 12:48:49.813123 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:49.813748 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:49.813780 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:49.813723 1790215 retry.go:31] will retry after 1.074334161s: waiting for domain to come up
	I0127 12:48:50.889220 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:50.889785 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:50.889855 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:50.889789 1790215 retry.go:31] will retry after 1.318459201s: waiting for domain to come up
	I0127 12:48:52.210197 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:52.210618 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:52.210645 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:52.210599 1790215 retry.go:31] will retry after 1.764815725s: waiting for domain to come up
	I0127 12:48:53.976580 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:53.977130 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:53.977158 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:53.977081 1790215 retry.go:31] will retry after 1.410873374s: waiting for domain to come up
	I0127 12:48:55.389480 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:55.389911 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:55.389944 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:55.389893 1790215 retry.go:31] will retry after 2.738916299s: waiting for domain to come up
	I0127 12:48:58.130207 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:58.130681 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:58.130707 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:58.130646 1790215 retry.go:31] will retry after 3.218706779s: waiting for domain to come up
	I0127 12:49:01.351430 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:01.351988 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:49:01.352019 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:49:01.351955 1790215 retry.go:31] will retry after 4.065804066s: waiting for domain to come up
	I0127 12:49:05.419663 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:05.420108 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has current primary IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:05.420160 1790192 main.go:141] libmachine: (bridge-956477) found domain IP: 192.168.72.28
	I0127 12:49:05.420175 1790192 main.go:141] libmachine: (bridge-956477) reserving static IP address...
	I0127 12:49:05.420595 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find host DHCP lease matching {name: "bridge-956477", mac: "52:54:00:49:99:d8", ip: "192.168.72.28"} in network mk-bridge-956477
	I0127 12:49:05.499266 1790192 main.go:141] libmachine: (bridge-956477) reserved static IP address 192.168.72.28 for domain bridge-956477
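	The "waiting for IP" retries above poll the DHCP leases of mk-bridge-956477 with growing, randomized delays until the guest obtains an address. A minimal shell equivalent (sketch only; the real backoff logic lives in retry.go) could look like:
	    # Poll the libvirt DHCP leases until the domain shows an IPv4 address (assumes virsh access).
	    until virsh domifaddr bridge-956477 --source lease | grep -q ipv4; do
	      sleep 2   # minikube uses randomized, growing delays instead of a fixed sleep
	    done
	    virsh domifaddr bridge-956477 --source lease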
	I0127 12:49:05.499303 1790192 main.go:141] libmachine: (bridge-956477) waiting for SSH...
	I0127 12:49:05.499314 1790192 main.go:141] libmachine: (bridge-956477) DBG | Getting to WaitForSSH function...
	I0127 12:49:05.501992 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:05.502523 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:minikube Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:05.502574 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:05.502769 1790192 main.go:141] libmachine: (bridge-956477) DBG | Using SSH client type: external
	I0127 12:49:05.502798 1790192 main.go:141] libmachine: (bridge-956477) DBG | Using SSH private key: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/id_rsa (-rw-------)
	I0127 12:49:05.502836 1790192 main.go:141] libmachine: (bridge-956477) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.28 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 12:49:05.502851 1790192 main.go:141] libmachine: (bridge-956477) DBG | About to run SSH command:
	I0127 12:49:05.502863 1790192 main.go:141] libmachine: (bridge-956477) DBG | exit 0
	I0127 12:49:05.630859 1790192 main.go:141] libmachine: (bridge-956477) DBG | SSH cmd err, output: <nil>: 
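	The SSH reachability check above simply runs `exit 0` over ssh with the logged options; reproduced as a standalone command (options, key path and address taken from the log), it is roughly:
	    ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 \
	        -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
	        -o PasswordAuthentication=no -o ServerAliveInterval=60 \
	        -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	        -o IdentitiesOnly=yes \
	        -i /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/id_rsa \
	        -p 22 docker@192.168.72.28 'exit 0' && echo "SSH is up"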
	I0127 12:49:05.631203 1790192 main.go:141] libmachine: (bridge-956477) KVM machine creation complete
	I0127 12:49:05.631537 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetConfigRaw
	I0127 12:49:05.632120 1790192 main.go:141] libmachine: (bridge-956477) Calling .DriverName
	I0127 12:49:05.632328 1790192 main.go:141] libmachine: (bridge-956477) Calling .DriverName
	I0127 12:49:05.632512 1790192 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0127 12:49:05.632550 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetState
	I0127 12:49:05.633838 1790192 main.go:141] libmachine: Detecting operating system of created instance...
	I0127 12:49:05.633852 1790192 main.go:141] libmachine: Waiting for SSH to be available...
	I0127 12:49:05.633858 1790192 main.go:141] libmachine: Getting to WaitForSSH function...
	I0127 12:49:05.633864 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:05.635988 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:05.636359 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:05.636387 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:05.636482 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:05.636688 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:05.636840 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:05.636999 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:05.637148 1790192 main.go:141] libmachine: Using SSH client type: native
	I0127 12:49:05.637417 1790192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0127 12:49:05.637432 1790192 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0127 12:49:05.753913 1790192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 12:49:05.753957 1790192 main.go:141] libmachine: Detecting the provisioner...
	I0127 12:49:05.753969 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:05.757035 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:05.757484 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:05.757521 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:05.757749 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:05.757961 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:05.758132 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:05.758270 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:05.758481 1790192 main.go:141] libmachine: Using SSH client type: native
	I0127 12:49:05.758721 1790192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0127 12:49:05.758739 1790192 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0127 12:49:05.871011 1790192 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0127 12:49:05.871181 1790192 main.go:141] libmachine: found compatible host: buildroot
	I0127 12:49:05.871198 1790192 main.go:141] libmachine: Provisioning with buildroot...
	I0127 12:49:05.871211 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetMachineName
	I0127 12:49:05.871499 1790192 buildroot.go:166] provisioning hostname "bridge-956477"
	I0127 12:49:05.871532 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetMachineName
	I0127 12:49:05.871711 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:05.874488 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:05.874941 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:05.874964 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:05.875152 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:05.875328 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:05.875456 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:05.875555 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:05.875684 1790192 main.go:141] libmachine: Using SSH client type: native
	I0127 12:49:05.875864 1790192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0127 12:49:05.875875 1790192 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-956477 && echo "bridge-956477" | sudo tee /etc/hostname
	I0127 12:49:05.999963 1790192 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-956477
	
	I0127 12:49:06.000010 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:06.002594 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.003041 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.003070 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.003263 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:06.003462 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:06.003628 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:06.003746 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:06.003889 1790192 main.go:141] libmachine: Using SSH client type: native
	I0127 12:49:06.004099 1790192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0127 12:49:06.004116 1790192 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-956477' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-956477/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-956477' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 12:49:06.126689 1790192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
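	A quick way to confirm that the hostname command and the idempotent /etc/hosts edit above took effect (verification sketch only, run over the same SSH session):
	    hostname                                           # expected: bridge-956477
	    grep -n 'bridge-956477' /etc/hostname /etc/hosts   # entry added or rewritten for 127.0.1.1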
	I0127 12:49:06.126724 1790192 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20318-1724227/.minikube CaCertPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20318-1724227/.minikube}
	I0127 12:49:06.126788 1790192 buildroot.go:174] setting up certificates
	I0127 12:49:06.126798 1790192 provision.go:84] configureAuth start
	I0127 12:49:06.126811 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetMachineName
	I0127 12:49:06.127071 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetIP
	I0127 12:49:06.129597 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.129936 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.129956 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.130134 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:06.132135 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.132428 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.132453 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.132601 1790192 provision.go:143] copyHostCerts
	I0127 12:49:06.132670 1790192 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.pem, removing ...
	I0127 12:49:06.132693 1790192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.pem
	I0127 12:49:06.132778 1790192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.pem (1078 bytes)
	I0127 12:49:06.132883 1790192 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-1724227/.minikube/cert.pem, removing ...
	I0127 12:49:06.132896 1790192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-1724227/.minikube/cert.pem
	I0127 12:49:06.132941 1790192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20318-1724227/.minikube/cert.pem (1123 bytes)
	I0127 12:49:06.133012 1790192 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-1724227/.minikube/key.pem, removing ...
	I0127 12:49:06.133023 1790192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-1724227/.minikube/key.pem
	I0127 12:49:06.133056 1790192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20318-1724227/.minikube/key.pem (1675 bytes)
	I0127 12:49:06.133127 1790192 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca-key.pem org=jenkins.bridge-956477 san=[127.0.0.1 192.168.72.28 bridge-956477 localhost minikube]
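	provision.go generates the server certificate in Go, signed by the cluster CA with the SANs listed above. A rough openssl equivalent (illustrative sketch only, not what minikube executes; file names follow the logged paths) is:
	    # Sketch: issue a server cert signed by ca.pem/ca-key.pem with the SANs from the log.
	    openssl req -new -newkey rsa:2048 -nodes \
	      -keyout server-key.pem -out server.csr -subj "/O=jenkins.bridge-956477"
	    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	      -out server.pem -days 365 \
	      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.72.28,DNS:bridge-956477,DNS:localhost,DNS:minikube')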
	I0127 12:49:06.244065 1790192 provision.go:177] copyRemoteCerts
	I0127 12:49:06.244134 1790192 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 12:49:06.244179 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:06.247068 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.247401 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.247439 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.247543 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:06.247734 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:06.247886 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:06.248045 1790192 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/id_rsa Username:docker}
	I0127 12:49:06.332164 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 12:49:06.355222 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0127 12:49:06.377606 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 12:49:06.400935 1790192 provision.go:87] duration metric: took 274.121357ms to configureAuth
	I0127 12:49:06.400966 1790192 buildroot.go:189] setting minikube options for container-runtime
	I0127 12:49:06.401190 1790192 config.go:182] Loaded profile config "bridge-956477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:49:06.401304 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:06.403876 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.404282 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.404311 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.404522 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:06.404717 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:06.404875 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:06.405024 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:06.405242 1790192 main.go:141] libmachine: Using SSH client type: native
	I0127 12:49:06.405432 1790192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0127 12:49:06.405453 1790192 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 12:49:06.632004 1790192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 12:49:06.632052 1790192 main.go:141] libmachine: Checking connection to Docker...
	I0127 12:49:06.632066 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetURL
	I0127 12:49:06.633455 1790192 main.go:141] libmachine: (bridge-956477) DBG | using libvirt version 6000000
	I0127 12:49:06.635940 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.636296 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.636319 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.636439 1790192 main.go:141] libmachine: Docker is up and running!
	I0127 12:49:06.636466 1790192 main.go:141] libmachine: Reticulating splines...
	I0127 12:49:06.636474 1790192 client.go:171] duration metric: took 21.485034654s to LocalClient.Create
	I0127 12:49:06.636493 1790192 start.go:167] duration metric: took 21.485094344s to libmachine.API.Create "bridge-956477"
	I0127 12:49:06.636508 1790192 start.go:293] postStartSetup for "bridge-956477" (driver="kvm2")
	I0127 12:49:06.636525 1790192 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 12:49:06.636556 1790192 main.go:141] libmachine: (bridge-956477) Calling .DriverName
	I0127 12:49:06.636838 1790192 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 12:49:06.636862 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:06.639069 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.639386 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.639422 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.639563 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:06.639752 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:06.639929 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:06.640062 1790192 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/id_rsa Username:docker}
	I0127 12:49:06.724850 1790192 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 12:49:06.729112 1790192 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 12:49:06.729134 1790192 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-1724227/.minikube/addons for local assets ...
	I0127 12:49:06.729192 1790192 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-1724227/.minikube/files for local assets ...
	I0127 12:49:06.729293 1790192 filesync.go:149] local asset: /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem -> 17313962.pem in /etc/ssl/certs
	I0127 12:49:06.729434 1790192 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 12:49:06.738467 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem --> /etc/ssl/certs/17313962.pem (1708 bytes)
	I0127 12:49:06.761545 1790192 start.go:296] duration metric: took 125.019791ms for postStartSetup
	I0127 12:49:06.761593 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetConfigRaw
	I0127 12:49:06.762205 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetIP
	I0127 12:49:06.765437 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.765808 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.765828 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.766138 1790192 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/config.json ...
	I0127 12:49:06.766350 1790192 start.go:128] duration metric: took 21.63314943s to createHost
	I0127 12:49:06.766380 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:06.768832 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.769141 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.769168 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.769330 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:06.769547 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:06.769745 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:06.769899 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:06.770075 1790192 main.go:141] libmachine: Using SSH client type: native
	I0127 12:49:06.770262 1790192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0127 12:49:06.770272 1790192 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 12:49:06.887120 1790192 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737982146.857755472
	
	I0127 12:49:06.887157 1790192 fix.go:216] guest clock: 1737982146.857755472
	I0127 12:49:06.887177 1790192 fix.go:229] Guest: 2025-01-27 12:49:06.857755472 +0000 UTC Remote: 2025-01-27 12:49:06.76636518 +0000 UTC m=+21.744166745 (delta=91.390292ms)
	I0127 12:49:06.887213 1790192 fix.go:200] guest clock delta is within tolerance: 91.390292ms
	I0127 12:49:06.887222 1790192 start.go:83] releasing machines lock for "bridge-956477", held for 21.754125785s
	I0127 12:49:06.887266 1790192 main.go:141] libmachine: (bridge-956477) Calling .DriverName
	I0127 12:49:06.887556 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetIP
	I0127 12:49:06.890291 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.890686 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.890715 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.890834 1790192 main.go:141] libmachine: (bridge-956477) Calling .DriverName
	I0127 12:49:06.891309 1790192 main.go:141] libmachine: (bridge-956477) Calling .DriverName
	I0127 12:49:06.891479 1790192 main.go:141] libmachine: (bridge-956477) Calling .DriverName
	I0127 12:49:06.891572 1790192 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 12:49:06.891614 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:06.891715 1790192 ssh_runner.go:195] Run: cat /version.json
	I0127 12:49:06.891742 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:06.894127 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.894492 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.894531 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.894720 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.894976 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:06.895300 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:06.895305 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.895579 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.895614 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:06.895836 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:06.895831 1790192 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/id_rsa Username:docker}
	I0127 12:49:06.896003 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:06.896190 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:06.896366 1790192 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/id_rsa Username:docker}
	I0127 12:49:07.014147 1790192 ssh_runner.go:195] Run: systemctl --version
	I0127 12:49:07.020023 1790192 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 12:49:07.181331 1790192 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 12:49:07.186863 1790192 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 12:49:07.186954 1790192 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 12:49:07.203385 1790192 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 12:49:07.203419 1790192 start.go:495] detecting cgroup driver to use...
	I0127 12:49:07.203478 1790192 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 12:49:07.218431 1790192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 12:49:07.231459 1790192 docker.go:217] disabling cri-docker service (if available) ...
	I0127 12:49:07.231505 1790192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 12:49:07.244939 1790192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 12:49:07.257985 1790192 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 12:49:07.382245 1790192 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 12:49:07.544971 1790192 docker.go:233] disabling docker service ...
	I0127 12:49:07.545044 1790192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 12:49:07.559296 1790192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 12:49:07.572107 1790192 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 12:49:07.710722 1790192 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 12:49:07.842352 1790192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 12:49:07.856902 1790192 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 12:49:07.873833 1790192 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 12:49:07.873895 1790192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:49:07.883449 1790192 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 12:49:07.883540 1790192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:49:07.893268 1790192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:49:07.902934 1790192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:49:07.913200 1790192 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 12:49:07.923183 1790192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:49:07.932933 1790192 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:49:07.948940 1790192 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
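For readers following the CRI-O tweaks above: a minimal Go sketch (illustration only, not minikube's own code) of what those sed rewrites amount to for a drop-in like /etc/crio/crio.conf.d/02-crio.conf. The sample snippet and its starting values are invented for the example.

// crio_conf_rewrite.go - applies the same kinds of line rewrites the sed
// commands above perform, but in-process on a made-up 02-crio.conf snippet.
package main

import (
	"fmt"
	"regexp"
)

const sample = `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
default_sysctls = [
]
`

func main() {
	conf := sample

	// point pause_image at the image the log configures
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)

	// switch the cgroup manager to cgroupfs and pin conmon to the pod cgroup
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")

	// open unprivileged ports by adding the sysctl inside default_sysctls
	conf = regexp.MustCompile(`(?m)^default_sysctls *= *\[`).
		ReplaceAllString(conf, "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",")

	fmt.Print(conf)
}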
	I0127 12:49:07.958726 1790192 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 12:49:07.967409 1790192 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 12:49:07.967473 1790192 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 12:49:07.979872 1790192 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 12:49:07.988693 1790192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:49:08.106626 1790192 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 12:49:08.190261 1790192 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 12:49:08.190341 1790192 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 12:49:08.195228 1790192 start.go:563] Will wait 60s for crictl version
	I0127 12:49:08.195312 1790192 ssh_runner.go:195] Run: which crictl
	I0127 12:49:08.198797 1790192 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 12:49:08.237887 1790192 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 12:49:08.238012 1790192 ssh_runner.go:195] Run: crio --version
	I0127 12:49:08.263030 1790192 ssh_runner.go:195] Run: crio --version
	I0127 12:49:08.290320 1790192 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 12:49:08.291370 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetIP
	I0127 12:49:08.294322 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:08.294643 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:08.294675 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:08.294858 1790192 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0127 12:49:08.298640 1790192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
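The /etc/hosts update above is a grep-out-then-append trick (drop any stale record for the name, then write a fresh one). A small stdlib Go equivalent, operating on a local sample file rather than the guest's real /etc/hosts, could look like this; the path is a placeholder.

// hosts_record.go - idempotently (re)writes one name/IP record in a hosts file,
// mirroring the bash one-liner in the log above.
package main

import (
	"log"
	"os"
	"strings"
)

func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// discard any existing record for this hostname
		if strings.HasSuffix(strings.TrimRight(line, " "), "\t"+name) {
			continue
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := upsertHost("hosts.sample", "192.168.72.1", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}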
	I0127 12:49:08.311920 1790192 kubeadm.go:883] updating cluster {Name:bridge-956477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-956477 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.28 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 12:49:08.312091 1790192 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 12:49:08.312156 1790192 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 12:49:08.343416 1790192 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 12:49:08.343484 1790192 ssh_runner.go:195] Run: which lz4
	I0127 12:49:08.347177 1790192 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 12:49:08.351091 1790192 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 12:49:08.351126 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0127 12:49:09.560777 1790192 crio.go:462] duration metric: took 1.213632525s to copy over tarball
	I0127 12:49:09.560892 1790192 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 12:49:11.737884 1790192 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.176958842s)
	I0127 12:49:11.737916 1790192 crio.go:469] duration metric: took 2.177103692s to extract the tarball
	I0127 12:49:11.737927 1790192 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 12:49:11.774005 1790192 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 12:49:11.812704 1790192 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 12:49:11.812729 1790192 cache_images.go:84] Images are preloaded, skipping loading
	I0127 12:49:11.812737 1790192 kubeadm.go:934] updating node { 192.168.72.28 8443 v1.32.1 crio true true} ...
	I0127 12:49:11.812874 1790192 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-956477 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.28
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:bridge-956477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0127 12:49:11.812971 1790192 ssh_runner.go:195] Run: crio config
	I0127 12:49:11.868174 1790192 cni.go:84] Creating CNI manager for "bridge"
	I0127 12:49:11.868200 1790192 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 12:49:11.868222 1790192 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.28 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-956477 NodeName:bridge-956477 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.28"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.28 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 12:49:11.868356 1790192 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.28
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-956477"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.28"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.28"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 12:49:11.868420 1790192 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 12:49:11.877576 1790192 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 12:49:11.877641 1790192 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 12:49:11.886156 1790192 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0127 12:49:11.901855 1790192 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 12:49:11.917311 1790192 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
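The kubeadm config dumped above is rendered from the cluster parameters before being copied out as kubeadm.yaml.new. As a stripped-down sketch of that idea (not minikube's actual template; only values visible in the log are used), a text/template version might look like this.

// kubeadm_config_gen.go - renders a minimal kubeadm config from a few parameters.
package main

import (
	"log"
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	params := map[string]any{
		"NodeIP":            "192.168.72.28",
		"APIServerPort":     8443,
		"CRISocket":         "/var/run/crio/crio.sock",
		"NodeName":          "bridge-956477",
		"KubernetesVersion": "v1.32.1",
		"PodSubnet":         "10.244.0.0/16",
		"ServiceSubnet":     "10.96.0.0/12",
	}
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, params); err != nil {
		log.Fatal(err)
	}
}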
	I0127 12:49:11.933025 1790192 ssh_runner.go:195] Run: grep 192.168.72.28	control-plane.minikube.internal$ /etc/hosts
	I0127 12:49:11.936616 1790192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.28	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:49:11.948439 1790192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:49:12.060451 1790192 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:49:12.076612 1790192 certs.go:68] Setting up /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477 for IP: 192.168.72.28
	I0127 12:49:12.076638 1790192 certs.go:194] generating shared ca certs ...
	I0127 12:49:12.076680 1790192 certs.go:226] acquiring lock for ca certs: {Name:mk9e5c59e9dbe250ccc1601895509e2d4f3690e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:49:12.076872 1790192 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.key
	I0127 12:49:12.076941 1790192 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.key
	I0127 12:49:12.076955 1790192 certs.go:256] generating profile certs ...
	I0127 12:49:12.077065 1790192 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/client.key
	I0127 12:49:12.077096 1790192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/client.crt with IP's: []
	I0127 12:49:12.388180 1790192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/client.crt ...
	I0127 12:49:12.388212 1790192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/client.crt: {Name:mk35e754849912c2ccbef7aee78a8cb664d71760 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:49:12.393143 1790192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/client.key ...
	I0127 12:49:12.393176 1790192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/client.key: {Name:mk1a4eb1684f2df27d8a0393e4c3ccce9e3de875 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:49:12.393803 1790192 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.key.754e3ec9
	I0127 12:49:12.393834 1790192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.crt.754e3ec9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.28]
	I0127 12:49:12.504705 1790192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.crt.754e3ec9 ...
	I0127 12:49:12.504741 1790192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.crt.754e3ec9: {Name:mkc470d67580d2e81bf8ee097c21f9b4e89d97ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:49:12.504924 1790192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.key.754e3ec9 ...
	I0127 12:49:12.504944 1790192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.key.754e3ec9: {Name:mkfe8a7bf14247bc7909277acbea55dbda14424f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:49:12.505661 1790192 certs.go:381] copying /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.crt.754e3ec9 -> /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.crt
	I0127 12:49:12.505776 1790192 certs.go:385] copying /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.key.754e3ec9 -> /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.key
	I0127 12:49:12.505863 1790192 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/proxy-client.key
	I0127 12:49:12.505887 1790192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/proxy-client.crt with IP's: []
	I0127 12:49:12.609829 1790192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/proxy-client.crt ...
	I0127 12:49:12.609856 1790192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/proxy-client.crt: {Name:mk6cb77c1a7b511e7130b2dd7423c6ba9c6d37ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:49:12.610644 1790192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/proxy-client.key ...
	I0127 12:49:12.610664 1790192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/proxy-client.key: {Name:mkd90fcc60d00c9236b383668f8a16c0de9554e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
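What certs.go is doing in the block above is generating CA-signed profile certificates (client, apiserver, aggregator). As a generic stdlib illustration of the same idea, not minikube's helpers, and with the key type and subject fields assumed for the example, a CA plus one signed client cert can be produced like this.

// profile_certs.go - self-signed CA plus a CA-signed client certificate.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"os"
	"time"
)

func main() {
	// CA key and self-signed CA certificate (stand-in for minikubeCA)
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
		IsCA:                  true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// client certificate ("minikube-user") signed by that CA
	cliKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	cliTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	cliDER, err := x509.CreateCertificate(rand.Reader, cliTmpl, caCert, &cliKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: cliDER})
}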
	I0127 12:49:12.614971 1790192 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/1731396.pem (1338 bytes)
	W0127 12:49:12.615016 1790192 certs.go:480] ignoring /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/1731396_empty.pem, impossibly tiny 0 bytes
	I0127 12:49:12.615026 1790192 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 12:49:12.615065 1790192 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem (1078 bytes)
	I0127 12:49:12.615119 1790192 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem (1123 bytes)
	I0127 12:49:12.615159 1790192 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/key.pem (1675 bytes)
	I0127 12:49:12.615202 1790192 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem (1708 bytes)
	I0127 12:49:12.615902 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 12:49:12.642386 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 12:49:12.667109 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 12:49:12.688637 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 12:49:12.711307 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0127 12:49:12.732852 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 12:49:12.756599 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 12:49:12.812442 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 12:49:12.836060 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/1731396.pem --> /usr/share/ca-certificates/1731396.pem (1338 bytes)
	I0127 12:49:12.857115 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem --> /usr/share/ca-certificates/17313962.pem (1708 bytes)
	I0127 12:49:12.879108 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 12:49:12.900872 1790192 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 12:49:12.917407 1790192 ssh_runner.go:195] Run: openssl version
	I0127 12:49:12.922608 1790192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1731396.pem && ln -fs /usr/share/ca-certificates/1731396.pem /etc/ssl/certs/1731396.pem"
	I0127 12:49:12.933376 1790192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1731396.pem
	I0127 12:49:12.937409 1790192 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 11:32 /usr/share/ca-certificates/1731396.pem
	I0127 12:49:12.937451 1790192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1731396.pem
	I0127 12:49:12.942881 1790192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1731396.pem /etc/ssl/certs/51391683.0"
	I0127 12:49:12.953628 1790192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17313962.pem && ln -fs /usr/share/ca-certificates/17313962.pem /etc/ssl/certs/17313962.pem"
	I0127 12:49:12.964554 1790192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17313962.pem
	I0127 12:49:12.968534 1790192 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 11:32 /usr/share/ca-certificates/17313962.pem
	I0127 12:49:12.968581 1790192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17313962.pem
	I0127 12:49:12.973893 1790192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17313962.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 12:49:12.984546 1790192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 12:49:12.994913 1790192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:49:12.998791 1790192 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 11:24 /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:49:12.998841 1790192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:49:13.003870 1790192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 12:49:13.013262 1790192 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 12:49:13.016784 1790192 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 12:49:13.016833 1790192 kubeadm.go:392] StartCluster: {Name:bridge-956477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-956477 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.28 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiz
ations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:49:13.016911 1790192 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 12:49:13.016987 1790192 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 12:49:13.050812 1790192 cri.go:89] found id: ""
	I0127 12:49:13.050889 1790192 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 12:49:13.059865 1790192 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 12:49:13.068783 1790192 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 12:49:13.077676 1790192 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 12:49:13.077698 1790192 kubeadm.go:157] found existing configuration files:
	
	I0127 12:49:13.077743 1790192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 12:49:13.086826 1790192 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 12:49:13.086886 1790192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 12:49:13.096763 1790192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 12:49:13.106090 1790192 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 12:49:13.106152 1790192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 12:49:13.115056 1790192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 12:49:13.123311 1790192 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 12:49:13.123381 1790192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 12:49:13.134697 1790192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 12:49:13.145287 1790192 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 12:49:13.145360 1790192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 12:49:13.156930 1790192 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 12:49:13.215215 1790192 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 12:49:13.215384 1790192 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 12:49:13.321518 1790192 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 12:49:13.321678 1790192 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 12:49:13.321803 1790192 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 12:49:13.332363 1790192 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 12:49:13.473799 1790192 out.go:235]   - Generating certificates and keys ...
	I0127 12:49:13.473979 1790192 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 12:49:13.474081 1790192 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 12:49:13.685866 1790192 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 12:49:13.770778 1790192 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 12:49:14.148126 1790192 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 12:49:14.239549 1790192 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 12:49:14.286201 1790192 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 12:49:14.286341 1790192 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-956477 localhost] and IPs [192.168.72.28 127.0.0.1 ::1]
	I0127 12:49:14.383724 1790192 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 12:49:14.383950 1790192 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-956477 localhost] and IPs [192.168.72.28 127.0.0.1 ::1]
	I0127 12:49:14.501996 1790192 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 12:49:14.665536 1790192 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 12:49:14.804446 1790192 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 12:49:14.804529 1790192 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 12:49:14.897657 1790192 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 12:49:14.966489 1790192 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 12:49:15.104336 1790192 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 12:49:15.164491 1790192 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 12:49:15.350906 1790192 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 12:49:15.351563 1790192 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 12:49:15.354014 1790192 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 12:49:15.355551 1790192 out.go:235]   - Booting up control plane ...
	I0127 12:49:15.355691 1790192 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 12:49:15.355786 1790192 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 12:49:15.356057 1790192 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 12:49:15.370685 1790192 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 12:49:15.376916 1790192 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 12:49:15.377006 1790192 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 12:49:15.515590 1790192 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 12:49:15.515750 1790192 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 12:49:16.516381 1790192 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001998745s
	I0127 12:49:16.516512 1790192 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 12:49:21.514222 1790192 kubeadm.go:310] [api-check] The API server is healthy after 5.001594227s
	I0127 12:49:21.532591 1790192 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 12:49:21.554627 1790192 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 12:49:21.596778 1790192 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 12:49:21.597017 1790192 kubeadm.go:310] [mark-control-plane] Marking the node bridge-956477 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 12:49:21.613382 1790192 kubeadm.go:310] [bootstrap-token] Using token: y217q3.atj9ddkanm9dqcqt
	I0127 12:49:21.614522 1790192 out.go:235]   - Configuring RBAC rules ...
	I0127 12:49:21.614665 1790192 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 12:49:21.626049 1790192 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 12:49:21.635045 1790192 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 12:49:21.642711 1790192 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 12:49:21.646716 1790192 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 12:49:21.650577 1790192 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 12:49:21.921382 1790192 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 12:49:22.339910 1790192 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 12:49:22.920294 1790192 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 12:49:22.921302 1790192 kubeadm.go:310] 
	I0127 12:49:22.921394 1790192 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 12:49:22.921411 1790192 kubeadm.go:310] 
	I0127 12:49:22.921499 1790192 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 12:49:22.921508 1790192 kubeadm.go:310] 
	I0127 12:49:22.921542 1790192 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 12:49:22.921642 1790192 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 12:49:22.921726 1790192 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 12:49:22.921741 1790192 kubeadm.go:310] 
	I0127 12:49:22.921806 1790192 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 12:49:22.921817 1790192 kubeadm.go:310] 
	I0127 12:49:22.921886 1790192 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 12:49:22.921897 1790192 kubeadm.go:310] 
	I0127 12:49:22.921961 1790192 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 12:49:22.922086 1790192 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 12:49:22.922181 1790192 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 12:49:22.922191 1790192 kubeadm.go:310] 
	I0127 12:49:22.922311 1790192 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 12:49:22.922407 1790192 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 12:49:22.922421 1790192 kubeadm.go:310] 
	I0127 12:49:22.922529 1790192 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token y217q3.atj9ddkanm9dqcqt \
	I0127 12:49:22.922664 1790192 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b321ef7b3ab14f6d2ff43d6919f4b8b8c82a4707be0e2d90af4f9d1ba84cbc1f \
	I0127 12:49:22.922701 1790192 kubeadm.go:310] 	--control-plane 
	I0127 12:49:22.922707 1790192 kubeadm.go:310] 
	I0127 12:49:22.922801 1790192 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 12:49:22.922809 1790192 kubeadm.go:310] 
	I0127 12:49:22.922871 1790192 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token y217q3.atj9ddkanm9dqcqt \
	I0127 12:49:22.922996 1790192 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b321ef7b3ab14f6d2ff43d6919f4b8b8c82a4707be0e2d90af4f9d1ba84cbc1f 
	I0127 12:49:22.923821 1790192 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
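For context on the join command printed above: the --discovery-token-ca-cert-hash value is, per kubeadm's documentation, a SHA-256 over the DER-encoded Subject Public Key Info of the cluster CA certificate. A small Go sketch of that computation follows; the ca.crt path is a placeholder.

// ca_cert_hash.go - derives the sha256:<hex> discovery hash from a CA cert.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("ca.crt") // e.g. /var/lib/minikube/certs/ca.crt
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// hash of the DER-encoded SubjectPublicKeyInfo, as kubeadm expects
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}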
	I0127 12:49:22.924014 1790192 cni.go:84] Creating CNI manager for "bridge"
	I0127 12:49:22.926262 1790192 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 12:49:22.927449 1790192 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 12:49:22.937784 1790192 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
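The 496-byte conflist written above is not shown in the log. For orientation only, this is an assumed, generic bridge + host-local conflist of the kind such a file typically contains; every field here is illustrative, not minikube's actual output.

// bridge_conflist.go - emits a minimal bridge CNI conflist as JSON.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

func main() {
	conflist := map[string]any{
		"cniVersion": "1.0.0",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":      "bridge",
				"bridge":    "bridge",
				"isGateway": true,
				"ipMasq":    true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{
				"type":         "portmap",
				"capabilities": map[string]bool{"portMappings": true},
			},
		},
	}
	out, err := json.MarshalIndent(conflist, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out))
}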
	I0127 12:49:22.955872 1790192 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 12:49:22.955954 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:49:22.956000 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-956477 minikube.k8s.io/updated_at=2025_01_27T12_49_22_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650 minikube.k8s.io/name=bridge-956477 minikube.k8s.io/primary=true
	I0127 12:49:22.984921 1790192 ops.go:34] apiserver oom_adj: -16
	I0127 12:49:23.101816 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:49:23.602076 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:49:24.102582 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:49:24.601942 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:49:25.102360 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:49:25.602350 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:49:26.102161 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:49:26.602794 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:49:27.102526 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:49:27.237160 1790192 kubeadm.go:1113] duration metric: took 4.281277151s to wait for elevateKubeSystemPrivileges
	I0127 12:49:27.237200 1790192 kubeadm.go:394] duration metric: took 14.220369926s to StartCluster
	I0127 12:49:27.237228 1790192 settings.go:142] acquiring lock: {Name:mk5612abdbdf8001cdf3481ad7c8001a04d496dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:49:27.237320 1790192 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20318-1724227/kubeconfig
	I0127 12:49:27.238783 1790192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/kubeconfig: {Name:mk9a903cebca75da3c74ff68f708e0a4e05f999b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:49:27.239069 1790192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 12:49:27.239072 1790192 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.28 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 12:49:27.239175 1790192 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 12:49:27.239310 1790192 addons.go:69] Setting storage-provisioner=true in profile "bridge-956477"
	I0127 12:49:27.239320 1790192 config.go:182] Loaded profile config "bridge-956477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:49:27.239330 1790192 addons.go:238] Setting addon storage-provisioner=true in "bridge-956477"
	I0127 12:49:27.239333 1790192 addons.go:69] Setting default-storageclass=true in profile "bridge-956477"
	I0127 12:49:27.239365 1790192 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-956477"
	I0127 12:49:27.239371 1790192 host.go:66] Checking if "bridge-956477" exists ...
	I0127 12:49:27.239830 1790192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:49:27.239873 1790192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:49:27.239917 1790192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:49:27.239957 1790192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:49:27.240680 1790192 out.go:177] * Verifying Kubernetes components...
	I0127 12:49:27.241931 1790192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:49:27.261385 1790192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34619
	I0127 12:49:27.261452 1790192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41767
	I0127 12:49:27.261810 1790192 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:49:27.262003 1790192 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:49:27.262389 1790192 main.go:141] libmachine: Using API Version  1
	I0127 12:49:27.262417 1790192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:49:27.262543 1790192 main.go:141] libmachine: Using API Version  1
	I0127 12:49:27.262563 1790192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:49:27.262767 1790192 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:49:27.262952 1790192 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:49:27.262989 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetState
	I0127 12:49:27.263506 1790192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:49:27.263537 1790192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:49:27.266688 1790192 addons.go:238] Setting addon default-storageclass=true in "bridge-956477"
	I0127 12:49:27.266732 1790192 host.go:66] Checking if "bridge-956477" exists ...
	I0127 12:49:27.267120 1790192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:49:27.267168 1790192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:49:27.278963 1790192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35581
	I0127 12:49:27.279421 1790192 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:49:27.279976 1790192 main.go:141] libmachine: Using API Version  1
	I0127 12:49:27.279999 1790192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:49:27.280431 1790192 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:49:27.280692 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetState
	I0127 12:49:27.282702 1790192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36023
	I0127 12:49:27.282845 1790192 main.go:141] libmachine: (bridge-956477) Calling .DriverName
	I0127 12:49:27.283179 1790192 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:49:27.283627 1790192 main.go:141] libmachine: Using API Version  1
	I0127 12:49:27.283649 1790192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:49:27.283978 1790192 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:49:27.284748 1790192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:49:27.284785 1790192 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:49:27.284797 1790192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:49:27.285956 1790192 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:49:27.285977 1790192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 12:49:27.286001 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:27.288697 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:27.289087 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:27.289110 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:27.289304 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:27.289459 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:27.289574 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:27.289669 1790192 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/id_rsa Username:docker}
	I0127 12:49:27.301672 1790192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33341
	I0127 12:49:27.302317 1790192 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:49:27.302925 1790192 main.go:141] libmachine: Using API Version  1
	I0127 12:49:27.302949 1790192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:49:27.303263 1790192 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:49:27.303488 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetState
	I0127 12:49:27.305258 1790192 main.go:141] libmachine: (bridge-956477) Calling .DriverName
	I0127 12:49:27.305479 1790192 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 12:49:27.305497 1790192 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 12:49:27.305517 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:27.308750 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:27.309243 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:27.309269 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:27.309409 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:27.309585 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:27.309726 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:27.309875 1790192 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/id_rsa Username:docker}
	I0127 12:49:27.500640 1790192 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:49:27.500778 1790192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
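The pipeline above splices a hosts block into CoreDNS's Corefile just before the forward directive so that host.minikube.internal resolves to the host. The same edit, done in-process on a sample Corefile (the Corefile text here is a typical default, not the one from this cluster), as a sketch:

// corefile_hosts.go - injects a hosts{} block ahead of the forward directive.
package main

import (
	"fmt"
	"strings"
)

const corefile = `.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa
    forward . /etc/resolv.conf
    cache 30
}
`

func main() {
	hostsBlock := "    hosts {\n" +
		"       192.168.72.1 host.minikube.internal\n" +
		"       fallthrough\n" +
		"    }\n"

	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		// slot the hosts block in just before the forward directive
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line)
	}
	fmt.Print(out.String())
}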
	I0127 12:49:27.538353 1790192 node_ready.go:35] waiting up to 15m0s for node "bridge-956477" to be "Ready" ...
	I0127 12:49:27.548400 1790192 node_ready.go:49] node "bridge-956477" has status "Ready":"True"
	I0127 12:49:27.548443 1790192 node_ready.go:38] duration metric: took 10.053639ms for node "bridge-956477" to be "Ready" ...
	I0127 12:49:27.548459 1790192 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:49:27.564271 1790192 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-c87bh" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:27.632137 1790192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:49:27.647091 1790192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 12:49:28.184542 1790192 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0127 12:49:28.549638 1790192 main.go:141] libmachine: Making call to close driver server
	I0127 12:49:28.549663 1790192 main.go:141] libmachine: (bridge-956477) Calling .Close
	I0127 12:49:28.550103 1790192 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:49:28.550127 1790192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:49:28.550137 1790192 main.go:141] libmachine: Making call to close driver server
	I0127 12:49:28.550144 1790192 main.go:141] libmachine: (bridge-956477) Calling .Close
	I0127 12:49:28.550198 1790192 main.go:141] libmachine: (bridge-956477) DBG | Closing plugin on server side
	I0127 12:49:28.550409 1790192 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:49:28.550429 1790192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:49:28.550443 1790192 main.go:141] libmachine: (bridge-956477) DBG | Closing plugin on server side
	I0127 12:49:28.550800 1790192 main.go:141] libmachine: Making call to close driver server
	I0127 12:49:28.550816 1790192 main.go:141] libmachine: (bridge-956477) Calling .Close
	I0127 12:49:28.551057 1790192 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:49:28.551076 1790192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:49:28.551081 1790192 main.go:141] libmachine: (bridge-956477) DBG | Closing plugin on server side
	I0127 12:49:28.551085 1790192 main.go:141] libmachine: Making call to close driver server
	I0127 12:49:28.551098 1790192 main.go:141] libmachine: (bridge-956477) Calling .Close
	I0127 12:49:28.551316 1790192 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:49:28.551331 1790192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:49:28.575614 1790192 main.go:141] libmachine: Making call to close driver server
	I0127 12:49:28.575665 1790192 main.go:141] libmachine: (bridge-956477) Calling .Close
	I0127 12:49:28.575924 1790192 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:49:28.575979 1790192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:49:28.575978 1790192 main.go:141] libmachine: (bridge-956477) DBG | Closing plugin on server side
	I0127 12:49:28.577474 1790192 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0127 12:49:28.578591 1790192 addons.go:514] duration metric: took 1.33943345s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0127 12:49:28.695806 1790192 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-956477" context rescaled to 1 replicas
	I0127 12:49:29.570116 1790192 pod_ready.go:103] pod "coredns-668d6bf9bc-c87bh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:49:31.570640 1790192 pod_ready.go:103] pod "coredns-668d6bf9bc-c87bh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:49:33.572383 1790192 pod_ready.go:103] pod "coredns-668d6bf9bc-c87bh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:49:34.570677 1790192 pod_ready.go:98] pod "coredns-668d6bf9bc-c87bh" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 12:49:34 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 12:49:27 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 12:49:27 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 12:49:27 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 12:49:27 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.72.28 HostIPs:[{IP:192.168.72.28}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-01-27 12:49:27 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-01-27 12:49:28 +0000 UTC,FinishedAt:2025-01-27 12:49:34 +0000 UTC,ContainerID:cri-o://f63238121640928054da5b75a8267e8c3b0ecb455be8db829959a17b2f86f494,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://f63238121640928054da5b75a8267e8c3b0ecb455be8db829959a17b2f86f494 Started:0xc0023f14c0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0021ef1e0} {Name:kube-api-access-j5rfl MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0021ef1f0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0127 12:49:34.570712 1790192 pod_ready.go:82] duration metric: took 7.006412478s for pod "coredns-668d6bf9bc-c87bh" in "kube-system" namespace to be "Ready" ...
	E0127 12:49:34.570726 1790192 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-668d6bf9bc-c87bh" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 12:49:34 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 12:49:27 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 12:49:27 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 12:49:27 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 12:49:27 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.72.28 HostIPs:[{IP:192.168.72.28}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-01-27 12:49:27 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-01-27 12:49:28 +0000 UTC,FinishedAt:2025-01-27 12:49:34 +0000 UTC,ContainerID:cri-o://f63238121640928054da5b75a8267e8c3b0ecb455be8db829959a17b2f86f494,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://f63238121640928054da5b75a8267e8c3b0ecb455be8db829959a17b2f86f494 Started:0xc0023f14c0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0021ef1e0} {Name:kube-api-access-j5rfl MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0021ef1f0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0127 12:49:34.570736 1790192 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-q9r6j" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:34.575210 1790192 pod_ready.go:93] pod "coredns-668d6bf9bc-q9r6j" in "kube-system" namespace has status "Ready":"True"
	I0127 12:49:34.575232 1790192 pod_ready.go:82] duration metric: took 4.46563ms for pod "coredns-668d6bf9bc-q9r6j" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:34.575241 1790192 pod_ready.go:79] waiting up to 15m0s for pod "etcd-bridge-956477" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:36.082910 1790192 pod_ready.go:93] pod "etcd-bridge-956477" in "kube-system" namespace has status "Ready":"True"
	I0127 12:49:36.082952 1790192 pod_ready.go:82] duration metric: took 1.507702821s for pod "etcd-bridge-956477" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:36.082968 1790192 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-bridge-956477" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:36.086925 1790192 pod_ready.go:93] pod "kube-apiserver-bridge-956477" in "kube-system" namespace has status "Ready":"True"
	I0127 12:49:36.086953 1790192 pod_ready.go:82] duration metric: took 3.975819ms for pod "kube-apiserver-bridge-956477" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:36.086969 1790192 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-bridge-956477" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:36.091952 1790192 pod_ready.go:93] pod "kube-controller-manager-bridge-956477" in "kube-system" namespace has status "Ready":"True"
	I0127 12:49:36.091969 1790192 pod_ready.go:82] duration metric: took 4.993389ms for pod "kube-controller-manager-bridge-956477" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:36.091978 1790192 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-8fw2n" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:36.170654 1790192 pod_ready.go:93] pod "kube-proxy-8fw2n" in "kube-system" namespace has status "Ready":"True"
	I0127 12:49:36.170678 1790192 pod_ready.go:82] duration metric: took 78.694605ms for pod "kube-proxy-8fw2n" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:36.170688 1790192 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-bridge-956477" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:36.568993 1790192 pod_ready.go:93] pod "kube-scheduler-bridge-956477" in "kube-system" namespace has status "Ready":"True"
	I0127 12:49:36.569019 1790192 pod_ready.go:82] duration metric: took 398.324568ms for pod "kube-scheduler-bridge-956477" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:36.569029 1790192 pod_ready.go:39] duration metric: took 9.020555356s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:49:36.569047 1790192 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:49:36.569110 1790192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:49:36.585221 1790192 api_server.go:72] duration metric: took 9.346111182s to wait for apiserver process to appear ...
	I0127 12:49:36.585260 1790192 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:49:36.585284 1790192 api_server.go:253] Checking apiserver healthz at https://192.168.72.28:8443/healthz ...
	I0127 12:49:36.592716 1790192 api_server.go:279] https://192.168.72.28:8443/healthz returned 200:
	ok
	I0127 12:49:36.594292 1790192 api_server.go:141] control plane version: v1.32.1
	I0127 12:49:36.594316 1790192 api_server.go:131] duration metric: took 9.04907ms to wait for apiserver health ...
	I0127 12:49:36.594325 1790192 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:49:36.771302 1790192 system_pods.go:59] 7 kube-system pods found
	I0127 12:49:36.771341 1790192 system_pods.go:61] "coredns-668d6bf9bc-q9r6j" [999c9062-2e0b-476e-8cf2-f462a0280779] Running
	I0127 12:49:36.771347 1790192 system_pods.go:61] "etcd-bridge-956477" [d82e5e0c-3cd1-48bb-9d1f-574dbca5e0cc] Running
	I0127 12:49:36.771353 1790192 system_pods.go:61] "kube-apiserver-bridge-956477" [8cbb1927-3e41-4894-b646-a02b07cfc4da] Running
	I0127 12:49:36.771358 1790192 system_pods.go:61] "kube-controller-manager-bridge-956477" [1214913d-b397-4e00-9d3f-927a4e471293] Running
	I0127 12:49:36.771363 1790192 system_pods.go:61] "kube-proxy-8fw2n" [00316310-fd3c-4bb3-91e1-0e309ea0cade] Running
	I0127 12:49:36.771368 1790192 system_pods.go:61] "kube-scheduler-bridge-956477" [5f90f0d7-62a7-49d0-b28a-cef4e5713bc4] Running
	I0127 12:49:36.771372 1790192 system_pods.go:61] "storage-provisioner" [417b172b-04aa-4f1a-8439-e4b76228f1ca] Running
	I0127 12:49:36.771382 1790192 system_pods.go:74] duration metric: took 177.049643ms to wait for pod list to return data ...
	I0127 12:49:36.771394 1790192 default_sa.go:34] waiting for default service account to be created ...
	I0127 12:49:36.969860 1790192 default_sa.go:45] found service account: "default"
	I0127 12:49:36.969891 1790192 default_sa.go:55] duration metric: took 198.486144ms for default service account to be created ...
	I0127 12:49:36.969903 1790192 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 12:49:37.173813 1790192 system_pods.go:87] 7 kube-system pods found
	I0127 12:49:37.370364 1790192 system_pods.go:105] "coredns-668d6bf9bc-q9r6j" [999c9062-2e0b-476e-8cf2-f462a0280779] Running
	I0127 12:49:37.370390 1790192 system_pods.go:105] "etcd-bridge-956477" [d82e5e0c-3cd1-48bb-9d1f-574dbca5e0cc] Running
	I0127 12:49:37.370396 1790192 system_pods.go:105] "kube-apiserver-bridge-956477" [8cbb1927-3e41-4894-b646-a02b07cfc4da] Running
	I0127 12:49:37.370401 1790192 system_pods.go:105] "kube-controller-manager-bridge-956477" [1214913d-b397-4e00-9d3f-927a4e471293] Running
	I0127 12:49:37.370407 1790192 system_pods.go:105] "kube-proxy-8fw2n" [00316310-fd3c-4bb3-91e1-0e309ea0cade] Running
	I0127 12:49:37.370411 1790192 system_pods.go:105] "kube-scheduler-bridge-956477" [5f90f0d7-62a7-49d0-b28a-cef4e5713bc4] Running
	I0127 12:49:37.370415 1790192 system_pods.go:105] "storage-provisioner" [417b172b-04aa-4f1a-8439-e4b76228f1ca] Running
	I0127 12:49:37.370423 1790192 system_pods.go:147] duration metric: took 400.513222ms to wait for k8s-apps to be running ...
	I0127 12:49:37.370430 1790192 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 12:49:37.370476 1790192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:49:37.386578 1790192 system_svc.go:56] duration metric: took 16.134406ms WaitForService to wait for kubelet
	I0127 12:49:37.386609 1790192 kubeadm.go:582] duration metric: took 10.147508217s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 12:49:37.386628 1790192 node_conditions.go:102] verifying NodePressure condition ...
	I0127 12:49:37.570387 1790192 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 12:49:37.570420 1790192 node_conditions.go:123] node cpu capacity is 2
	I0127 12:49:37.570439 1790192 node_conditions.go:105] duration metric: took 183.805809ms to run NodePressure ...
	I0127 12:49:37.570455 1790192 start.go:241] waiting for startup goroutines ...
	I0127 12:49:37.570466 1790192 start.go:246] waiting for cluster config update ...
	I0127 12:49:37.570478 1790192 start.go:255] writing updated cluster config ...
	I0127 12:49:37.570833 1790192 ssh_runner.go:195] Run: rm -f paused
	I0127 12:49:37.621383 1790192 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 12:49:37.623996 1790192 out.go:177] * Done! kubectl is now configured to use "bridge-956477" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 27 12:51:03 old-k8s-version-488586 crio[629]: time="2025-01-27 12:51:03.452425438Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737982263452404505,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d8068234-31f2-4c4f-be4f-e6ea08c03383 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:51:03 old-k8s-version-488586 crio[629]: time="2025-01-27 12:51:03.452887562Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a8ce4c40-4c1b-467f-a9f3-1e34a8ddf843 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:51:03 old-k8s-version-488586 crio[629]: time="2025-01-27 12:51:03.453009089Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a8ce4c40-4c1b-467f-a9f3-1e34a8ddf843 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:51:03 old-k8s-version-488586 crio[629]: time="2025-01-27 12:51:03.453084268Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a8ce4c40-4c1b-467f-a9f3-1e34a8ddf843 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:51:03 old-k8s-version-488586 crio[629]: time="2025-01-27 12:51:03.482242716Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=496ca214-6e38-4a9f-9ef6-9310ac1a8313 name=/runtime.v1.RuntimeService/Version
	Jan 27 12:51:03 old-k8s-version-488586 crio[629]: time="2025-01-27 12:51:03.482327080Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=496ca214-6e38-4a9f-9ef6-9310ac1a8313 name=/runtime.v1.RuntimeService/Version
	Jan 27 12:51:03 old-k8s-version-488586 crio[629]: time="2025-01-27 12:51:03.483356425Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=96b7248c-dbc0-44fa-9c2a-ff9c52e73b67 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:51:03 old-k8s-version-488586 crio[629]: time="2025-01-27 12:51:03.483737832Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737982263483716376,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=96b7248c-dbc0-44fa-9c2a-ff9c52e73b67 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:51:03 old-k8s-version-488586 crio[629]: time="2025-01-27 12:51:03.484274207Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3da06954-46de-4e1e-9d4f-6837b1915e18 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:51:03 old-k8s-version-488586 crio[629]: time="2025-01-27 12:51:03.484339249Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3da06954-46de-4e1e-9d4f-6837b1915e18 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:51:03 old-k8s-version-488586 crio[629]: time="2025-01-27 12:51:03.484386534Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3da06954-46de-4e1e-9d4f-6837b1915e18 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:51:03 old-k8s-version-488586 crio[629]: time="2025-01-27 12:51:03.514681998Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9f51f39a-d73c-49aa-a24f-6c71a0157cd5 name=/runtime.v1.RuntimeService/Version
	Jan 27 12:51:03 old-k8s-version-488586 crio[629]: time="2025-01-27 12:51:03.514807492Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9f51f39a-d73c-49aa-a24f-6c71a0157cd5 name=/runtime.v1.RuntimeService/Version
	Jan 27 12:51:03 old-k8s-version-488586 crio[629]: time="2025-01-27 12:51:03.516338828Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9658ab5c-637b-47c5-ba96-f754806de108 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:51:03 old-k8s-version-488586 crio[629]: time="2025-01-27 12:51:03.516878005Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737982263516845678,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9658ab5c-637b-47c5-ba96-f754806de108 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:51:03 old-k8s-version-488586 crio[629]: time="2025-01-27 12:51:03.517489556Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9eb9b104-5f14-4a1f-b764-c908af4e4dbc name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:51:03 old-k8s-version-488586 crio[629]: time="2025-01-27 12:51:03.517575556Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9eb9b104-5f14-4a1f-b764-c908af4e4dbc name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:51:03 old-k8s-version-488586 crio[629]: time="2025-01-27 12:51:03.517609278Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9eb9b104-5f14-4a1f-b764-c908af4e4dbc name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:51:03 old-k8s-version-488586 crio[629]: time="2025-01-27 12:51:03.548146536Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cf3eeed4-d365-4948-acd0-f7bbce306902 name=/runtime.v1.RuntimeService/Version
	Jan 27 12:51:03 old-k8s-version-488586 crio[629]: time="2025-01-27 12:51:03.548238289Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cf3eeed4-d365-4948-acd0-f7bbce306902 name=/runtime.v1.RuntimeService/Version
	Jan 27 12:51:03 old-k8s-version-488586 crio[629]: time="2025-01-27 12:51:03.549606549Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b688a22e-78bf-4826-bc5a-87008e36d3d5 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:51:03 old-k8s-version-488586 crio[629]: time="2025-01-27 12:51:03.550041245Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737982263550015713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b688a22e-78bf-4826-bc5a-87008e36d3d5 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:51:03 old-k8s-version-488586 crio[629]: time="2025-01-27 12:51:03.550549582Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d634d224-3693-4006-919e-3ec59b852a59 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:51:03 old-k8s-version-488586 crio[629]: time="2025-01-27 12:51:03.550593062Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d634d224-3693-4006-919e-3ec59b852a59 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:51:03 old-k8s-version-488586 crio[629]: time="2025-01-27 12:51:03.550627527Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d634d224-3693-4006-919e-3ec59b852a59 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jan27 12:33] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053366] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041222] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.970481] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.025771] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.448056] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.962494] systemd-fstab-generator[554]: Ignoring "noauto" option for root device
	[  +0.061579] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.076809] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.161479] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.142173] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.226136] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +6.196155] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.067931] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.820821] systemd-fstab-generator[1007]: Ignoring "noauto" option for root device
	[Jan27 12:34] kauditd_printk_skb: 46 callbacks suppressed
	[Jan27 12:38] systemd-fstab-generator[5113]: Ignoring "noauto" option for root device
	[Jan27 12:40] systemd-fstab-generator[5392]: Ignoring "noauto" option for root device
	[  +0.064559] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:51:03 up 17 min,  0 users,  load average: 0.00, 0.05, 0.07
	Linux old-k8s-version-488586 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jan 27 12:51:01 old-k8s-version-488586 kubelet[6557]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001020c0, 0xc000a96e10)
	Jan 27 12:51:01 old-k8s-version-488586 kubelet[6557]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Jan 27 12:51:01 old-k8s-version-488586 kubelet[6557]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Jan 27 12:51:01 old-k8s-version-488586 kubelet[6557]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Jan 27 12:51:01 old-k8s-version-488586 kubelet[6557]: goroutine 162 [select]:
	Jan 27 12:51:01 old-k8s-version-488586 kubelet[6557]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000b99ef0, 0x4f0ac20, 0xc000aa2640, 0x1, 0xc0001020c0)
	Jan 27 12:51:01 old-k8s-version-488586 kubelet[6557]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Jan 27 12:51:01 old-k8s-version-488586 kubelet[6557]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000257180, 0xc0001020c0)
	Jan 27 12:51:01 old-k8s-version-488586 kubelet[6557]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jan 27 12:51:01 old-k8s-version-488586 kubelet[6557]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jan 27 12:51:01 old-k8s-version-488586 kubelet[6557]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jan 27 12:51:01 old-k8s-version-488586 kubelet[6557]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000a6c790, 0xc00098f6a0)
	Jan 27 12:51:01 old-k8s-version-488586 kubelet[6557]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jan 27 12:51:01 old-k8s-version-488586 kubelet[6557]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jan 27 12:51:01 old-k8s-version-488586 kubelet[6557]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jan 27 12:51:01 old-k8s-version-488586 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 27 12:51:01 old-k8s-version-488586 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 27 12:51:02 old-k8s-version-488586 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Jan 27 12:51:02 old-k8s-version-488586 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 27 12:51:02 old-k8s-version-488586 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 27 12:51:02 old-k8s-version-488586 kubelet[6565]: I0127 12:51:02.395481    6565 server.go:416] Version: v1.20.0
	Jan 27 12:51:02 old-k8s-version-488586 kubelet[6565]: I0127 12:51:02.395946    6565 server.go:837] Client rotation is on, will bootstrap in background
	Jan 27 12:51:02 old-k8s-version-488586 kubelet[6565]: I0127 12:51:02.397975    6565 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 27 12:51:02 old-k8s-version-488586 kubelet[6565]: I0127 12:51:02.399000    6565 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Jan 27 12:51:02 old-k8s-version-488586 kubelet[6565]: W0127 12:51:02.399027    6565 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-488586 -n old-k8s-version-488586
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-488586 -n old-k8s-version-488586: exit status 2 (227.210967ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-488586" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.41s)
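Note: the kubelet log above shows kubelet.service crash-looping on the node (systemd restart counter at 114) while the apiserver on localhost:8443 refuses connections, which is why the "describe nodes" step and the subsequent status check report the apiserver as Stopped. A minimal reproduction sketch for inspecting that state by hand, assuming the standard minikube CLI and the profile name shown above (not part of the original test run):
	# SSH into the affected profile (hypothetical manual check, not executed by the test)
	out/minikube-linux-amd64 -p old-k8s-version-488586 ssh
	# inside the VM: check kubelet service state and its most recent crash output
	sudo systemctl status kubelet
	sudo journalctl -u kubelet --no-pager -n 50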

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (362.78s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:51:17.139743 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/custom-flannel-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:51:19.122449 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kindnet-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:51:36.326236 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:51:37.993112 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/enable-default-cni-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:51:37.999469 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/enable-default-cni-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:51:38.010819 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/enable-default-cni-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:51:38.032170 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/enable-default-cni-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:51:38.074277 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/enable-default-cni-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:51:38.155742 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/enable-default-cni-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:51:38.317310 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/enable-default-cni-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:51:38.639034 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/enable-default-cni-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:51:39.280914 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/enable-default-cni-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:51:40.563214 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/enable-default-cni-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:51:43.124764 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/enable-default-cni-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:51:46.824428 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kindnet-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:51:48.246980 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/enable-default-cni-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:51:58.488987 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/enable-default-cni-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:52:18.970888 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/enable-default-cni-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:52:39.061778 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/custom-flannel-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:52:59.932868 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/enable-default-cni-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:53:11.565231 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/calico-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:53:12.670423 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/flannel-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:53:12.676820 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/flannel-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:53:12.688165 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/flannel-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:53:12.709522 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/flannel-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:53:12.750957 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/flannel-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:53:12.832387 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/flannel-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:53:12.993937 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/flannel-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:53:13.315276 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/flannel-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:53:13.957406 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/flannel-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:53:15.239564 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/flannel-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:53:17.801345 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/flannel-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:53:22.922708 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/flannel-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:53:33.164233 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/flannel-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:53:39.266574 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/calico-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:53:53.646257 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/flannel-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:54:21.854389 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/enable-default-cni-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:54:34.608259 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/flannel-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
	[previous warning repeated 3 more times]
E0127 12:54:38.060018 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:54:38.066421 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:54:38.077788 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:54:38.099173 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:54:38.140622 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:54:38.222076 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:54:38.383640 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:54:38.705404 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/client.crt: no such file or directory" logger="UnhandledError"
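The cert_rotation.go:171 errors above (and the similar ones for flannel-956477, addons-010792, and other profiles later in this log) likely come from client-go's certificate-reload path still referencing kubeconfig users for minikube profiles that have already been deleted, so the client.crt files under .minikube/profiles/ no longer exist; they are noise relative to this test's failure. As a minimal sketch, assuming only client-go's clientcmd package and a hypothetical kubeconfig path, one could list the stale certificate references like this (not part of the minikube test suite):

	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Hypothetical path; substitute the kubeconfig used by the CI job.
		kubeconfig := os.Getenv("KUBECONFIG")

		cfg, err := clientcmd.LoadFromFile(kubeconfig)
		if err != nil {
			panic(err)
		}
		// Report every user entry whose client certificate file is missing on disk,
		// the condition behind the "no such file or directory" errors above.
		for name, auth := range cfg.AuthInfos {
			if auth.ClientCertificate == "" {
				continue
			}
			if _, err := os.Stat(auth.ClientCertificate); os.IsNotExist(err) {
				fmt.Printf("user %q references missing cert %s\n", name, auth.ClientCertificate)
			}
		}
	}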
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:54:39.347325 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:54:39.401732 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:54:40.628962 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
	[previous warning repeated 2 more times]
E0127 12:54:43.190484 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
	[previous warning repeated 4 more times]
E0127 12:54:48.311983 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
	[previous warning repeated 5 more times]
E0127 12:54:54.972560 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/auto-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:54:55.201886 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/custom-flannel-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
	[previous warning repeated 2 more times]
E0127 12:54:58.553991 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
	[previous warning repeated 7 more times]
E0127 12:55:07.001911 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/functional-977534/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
	[previous warning repeated 12 more times]
E0127 12:55:19.035288 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
	[previous warning repeated 2 more times]
E0127 12:55:22.903836 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/custom-flannel-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
	[previous warning repeated 33 more times]
E0127 12:55:56.529790 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/flannel-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
	[previous warning repeated 2 more times]
E0127 12:55:59.997231 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
	[previous warning repeated 19 more times]
E0127 12:56:19.122391 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/kindnet-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
	[previous warning repeated 16 more times]
E0127 12:56:36.326948 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
E0127 12:56:37.992891 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/enable-default-cni-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.109:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.109:8443: connect: connection refused
	[previous warning repeated 27 more times]
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-488586 -n old-k8s-version-488586
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-488586 -n old-k8s-version-488586: exit status 2 (241.265101ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "old-k8s-version-488586" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-488586 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-488586 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.176µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-488586 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-488586 -n old-k8s-version-488586
E0127 12:57:05.696634 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/enable-default-cni-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-488586 -n old-k8s-version-488586: exit status 2 (231.741505ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-488586 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-956477 sudo iptables                       | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:49 UTC | 27 Jan 25 12:49 UTC |
	|         | -t nat -L -n -v                                      |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:49 UTC | 27 Jan 25 12:49 UTC |
	|         | systemctl status kubelet --all                       |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:49 UTC | 27 Jan 25 12:49 UTC |
	|         | systemctl cat kubelet                                |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:49 UTC | 27 Jan 25 12:49 UTC |
	|         | journalctl -xeu kubelet --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo cat                            | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:49 UTC | 27 Jan 25 12:49 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo cat                            | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:49 UTC | 27 Jan 25 12:49 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:49 UTC |                     |
	|         | systemctl status docker --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:49 UTC | 27 Jan 25 12:49 UTC |
	|         | systemctl cat docker                                 |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo cat                            | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:49 UTC | 27 Jan 25 12:49 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo docker                         | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC |                     |
	|         | systemctl status cri-docker                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | systemctl cat cri-docker                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo cat                            | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo cat                            | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC |                     |
	|         | systemctl status containerd                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | systemctl cat containerd                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo cat                            | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo cat                            | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | containerd config dump                               |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | systemctl status crio --all                          |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo                                | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | systemctl cat crio --no-pager                        |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo find                           | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p bridge-956477 sudo crio                           | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p bridge-956477                                     | bridge-956477 | jenkins | v1.35.0 | 27 Jan 25 12:50 UTC | 27 Jan 25 12:50 UTC |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 12:48:45
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 12:48:45.061131 1790192 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:48:45.061460 1790192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:48:45.061507 1790192 out.go:358] Setting ErrFile to fd 2...
	I0127 12:48:45.061571 1790192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:48:45.061947 1790192 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-1724227/.minikube/bin
	I0127 12:48:45.062550 1790192 out.go:352] Setting JSON to false
	I0127 12:48:45.063760 1790192 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":34266,"bootTime":1737947859,"procs":313,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 12:48:45.063872 1790192 start.go:139] virtualization: kvm guest
	I0127 12:48:45.065969 1790192 out.go:177] * [bridge-956477] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 12:48:45.067136 1790192 out.go:177]   - MINIKUBE_LOCATION=20318
	I0127 12:48:45.067134 1790192 notify.go:220] Checking for updates...
	I0127 12:48:45.068296 1790192 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:48:45.069519 1790192 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20318-1724227/kubeconfig
	I0127 12:48:45.070522 1790192 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-1724227/.minikube
	I0127 12:48:45.071653 1790192 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 12:48:45.072745 1790192 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:48:45.074387 1790192 config.go:182] Loaded profile config "default-k8s-diff-port-485564": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:48:45.074542 1790192 config.go:182] Loaded profile config "no-preload-472479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:48:45.074661 1790192 config.go:182] Loaded profile config "old-k8s-version-488586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 12:48:45.074797 1790192 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:48:45.111354 1790192 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 12:48:45.112385 1790192 start.go:297] selected driver: kvm2
	I0127 12:48:45.112404 1790192 start.go:901] validating driver "kvm2" against <nil>
	I0127 12:48:45.112417 1790192 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:48:45.113111 1790192 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:48:45.113192 1790192 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20318-1724227/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 12:48:45.129191 1790192 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 12:48:45.129247 1790192 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 12:48:45.129509 1790192 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 12:48:45.129542 1790192 cni.go:84] Creating CNI manager for "bridge"
	I0127 12:48:45.129550 1790192 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 12:48:45.129616 1790192 start.go:340] cluster config:
	{Name:bridge-956477 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-956477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:48:45.129762 1790192 iso.go:125] acquiring lock: {Name:mkadfcca31fda677c8c62cad1d325dd2bd0a2473 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:48:45.131229 1790192 out.go:177] * Starting "bridge-956477" primary control-plane node in "bridge-956477" cluster
	I0127 12:48:45.132207 1790192 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 12:48:45.132243 1790192 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 12:48:45.132258 1790192 cache.go:56] Caching tarball of preloaded images
	I0127 12:48:45.132337 1790192 preload.go:172] Found /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 12:48:45.132351 1790192 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 12:48:45.132455 1790192 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/config.json ...
	I0127 12:48:45.132478 1790192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/config.json: {Name:mka55a4b4af7aaf9911ae593f9f5e3f84a3441e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:48:45.133024 1790192 start.go:360] acquireMachinesLock for bridge-956477: {Name:mk206ddd5e564ea6986d65bef76be5837a8b5360 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 12:48:45.133083 1790192 start.go:364] duration metric: took 34.753µs to acquireMachinesLock for "bridge-956477"
	I0127 12:48:45.133110 1790192 start.go:93] Provisioning new machine with config: &{Name:bridge-956477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-956477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 12:48:45.133187 1790192 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 12:48:45.134561 1790192 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0127 12:48:45.134690 1790192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:48:45.134731 1790192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:48:45.149509 1790192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35177
	I0127 12:48:45.150027 1790192 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:48:45.150619 1790192 main.go:141] libmachine: Using API Version  1
	I0127 12:48:45.150641 1790192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:48:45.150972 1790192 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:48:45.151149 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetMachineName
	I0127 12:48:45.151259 1790192 main.go:141] libmachine: (bridge-956477) Calling .DriverName
	I0127 12:48:45.151400 1790192 start.go:159] libmachine.API.Create for "bridge-956477" (driver="kvm2")
	I0127 12:48:45.151431 1790192 client.go:168] LocalClient.Create starting
	I0127 12:48:45.151462 1790192 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem
	I0127 12:48:45.151502 1790192 main.go:141] libmachine: Decoding PEM data...
	I0127 12:48:45.151518 1790192 main.go:141] libmachine: Parsing certificate...
	I0127 12:48:45.151583 1790192 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem
	I0127 12:48:45.151607 1790192 main.go:141] libmachine: Decoding PEM data...
	I0127 12:48:45.151621 1790192 main.go:141] libmachine: Parsing certificate...
	I0127 12:48:45.151653 1790192 main.go:141] libmachine: Running pre-create checks...
	I0127 12:48:45.151666 1790192 main.go:141] libmachine: (bridge-956477) Calling .PreCreateCheck
	I0127 12:48:45.152022 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetConfigRaw
	I0127 12:48:45.152404 1790192 main.go:141] libmachine: Creating machine...
	I0127 12:48:45.152417 1790192 main.go:141] libmachine: (bridge-956477) Calling .Create
	I0127 12:48:45.152533 1790192 main.go:141] libmachine: (bridge-956477) creating KVM machine...
	I0127 12:48:45.152554 1790192 main.go:141] libmachine: (bridge-956477) creating network...
	I0127 12:48:45.153709 1790192 main.go:141] libmachine: (bridge-956477) DBG | found existing default KVM network
	I0127 12:48:45.154981 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:45.154812 1790215 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:13:89:36} reservation:<nil>}
	I0127 12:48:45.156047 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:45.155949 1790215 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:f9:0f:53} reservation:<nil>}
	I0127 12:48:45.156973 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:45.156878 1790215 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:ac:57:68} reservation:<nil>}
	I0127 12:48:45.158158 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:45.158076 1790215 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00039efc0}
	I0127 12:48:45.158183 1790192 main.go:141] libmachine: (bridge-956477) DBG | created network xml: 
	I0127 12:48:45.158196 1790192 main.go:141] libmachine: (bridge-956477) DBG | <network>
	I0127 12:48:45.158206 1790192 main.go:141] libmachine: (bridge-956477) DBG |   <name>mk-bridge-956477</name>
	I0127 12:48:45.158211 1790192 main.go:141] libmachine: (bridge-956477) DBG |   <dns enable='no'/>
	I0127 12:48:45.158215 1790192 main.go:141] libmachine: (bridge-956477) DBG |   
	I0127 12:48:45.158222 1790192 main.go:141] libmachine: (bridge-956477) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0127 12:48:45.158232 1790192 main.go:141] libmachine: (bridge-956477) DBG |     <dhcp>
	I0127 12:48:45.158241 1790192 main.go:141] libmachine: (bridge-956477) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0127 12:48:45.158250 1790192 main.go:141] libmachine: (bridge-956477) DBG |     </dhcp>
	I0127 12:48:45.158258 1790192 main.go:141] libmachine: (bridge-956477) DBG |   </ip>
	I0127 12:48:45.158266 1790192 main.go:141] libmachine: (bridge-956477) DBG |   
	I0127 12:48:45.158275 1790192 main.go:141] libmachine: (bridge-956477) DBG | </network>
	I0127 12:48:45.158288 1790192 main.go:141] libmachine: (bridge-956477) DBG | 
	I0127 12:48:45.163152 1790192 main.go:141] libmachine: (bridge-956477) DBG | trying to create private KVM network mk-bridge-956477 192.168.72.0/24...
	I0127 12:48:45.234336 1790192 main.go:141] libmachine: (bridge-956477) DBG | private KVM network mk-bridge-956477 192.168.72.0/24 created
	I0127 12:48:45.234373 1790192 main.go:141] libmachine: (bridge-956477) setting up store path in /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477 ...
	I0127 12:48:45.234401 1790192 main.go:141] libmachine: (bridge-956477) building disk image from file:///home/jenkins/minikube-integration/20318-1724227/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 12:48:45.234417 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:45.234378 1790215 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20318-1724227/.minikube
	I0127 12:48:45.234566 1790192 main.go:141] libmachine: (bridge-956477) Downloading /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20318-1724227/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 12:48:45.542800 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:45.542627 1790215 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/id_rsa...
	I0127 12:48:45.665840 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:45.665684 1790215 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/bridge-956477.rawdisk...
	I0127 12:48:45.665878 1790192 main.go:141] libmachine: (bridge-956477) DBG | Writing magic tar header
	I0127 12:48:45.665895 1790192 main.go:141] libmachine: (bridge-956477) DBG | Writing SSH key tar header
	I0127 12:48:45.665905 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:45.665802 1790215 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477 ...
	I0127 12:48:45.665915 1790192 main.go:141] libmachine: (bridge-956477) setting executable bit set on /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477 (perms=drwx------)
	I0127 12:48:45.665924 1790192 main.go:141] libmachine: (bridge-956477) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477
	I0127 12:48:45.665934 1790192 main.go:141] libmachine: (bridge-956477) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines
	I0127 12:48:45.665954 1790192 main.go:141] libmachine: (bridge-956477) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20318-1724227/.minikube
	I0127 12:48:45.665963 1790192 main.go:141] libmachine: (bridge-956477) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20318-1724227
	I0127 12:48:45.665979 1790192 main.go:141] libmachine: (bridge-956477) setting executable bit set on /home/jenkins/minikube-integration/20318-1724227/.minikube/machines (perms=drwxr-xr-x)
	I0127 12:48:45.665993 1790192 main.go:141] libmachine: (bridge-956477) setting executable bit set on /home/jenkins/minikube-integration/20318-1724227/.minikube (perms=drwxr-xr-x)
	I0127 12:48:45.666023 1790192 main.go:141] libmachine: (bridge-956477) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 12:48:45.666045 1790192 main.go:141] libmachine: (bridge-956477) DBG | checking permissions on dir: /home/jenkins
	I0127 12:48:45.666058 1790192 main.go:141] libmachine: (bridge-956477) setting executable bit set on /home/jenkins/minikube-integration/20318-1724227 (perms=drwxrwxr-x)
	I0127 12:48:45.666069 1790192 main.go:141] libmachine: (bridge-956477) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 12:48:45.666074 1790192 main.go:141] libmachine: (bridge-956477) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 12:48:45.666085 1790192 main.go:141] libmachine: (bridge-956477) creating domain...
	I0127 12:48:45.666092 1790192 main.go:141] libmachine: (bridge-956477) DBG | checking permissions on dir: /home
	I0127 12:48:45.666099 1790192 main.go:141] libmachine: (bridge-956477) DBG | skipping /home - not owner
	I0127 12:48:45.667183 1790192 main.go:141] libmachine: (bridge-956477) define libvirt domain using xml: 
	I0127 12:48:45.667207 1790192 main.go:141] libmachine: (bridge-956477) <domain type='kvm'>
	I0127 12:48:45.667217 1790192 main.go:141] libmachine: (bridge-956477)   <name>bridge-956477</name>
	I0127 12:48:45.667225 1790192 main.go:141] libmachine: (bridge-956477)   <memory unit='MiB'>3072</memory>
	I0127 12:48:45.667233 1790192 main.go:141] libmachine: (bridge-956477)   <vcpu>2</vcpu>
	I0127 12:48:45.667241 1790192 main.go:141] libmachine: (bridge-956477)   <features>
	I0127 12:48:45.667252 1790192 main.go:141] libmachine: (bridge-956477)     <acpi/>
	I0127 12:48:45.667256 1790192 main.go:141] libmachine: (bridge-956477)     <apic/>
	I0127 12:48:45.667262 1790192 main.go:141] libmachine: (bridge-956477)     <pae/>
	I0127 12:48:45.667266 1790192 main.go:141] libmachine: (bridge-956477)     
	I0127 12:48:45.667283 1790192 main.go:141] libmachine: (bridge-956477)   </features>
	I0127 12:48:45.667291 1790192 main.go:141] libmachine: (bridge-956477)   <cpu mode='host-passthrough'>
	I0127 12:48:45.667311 1790192 main.go:141] libmachine: (bridge-956477)   
	I0127 12:48:45.667327 1790192 main.go:141] libmachine: (bridge-956477)   </cpu>
	I0127 12:48:45.667351 1790192 main.go:141] libmachine: (bridge-956477)   <os>
	I0127 12:48:45.667372 1790192 main.go:141] libmachine: (bridge-956477)     <type>hvm</type>
	I0127 12:48:45.667389 1790192 main.go:141] libmachine: (bridge-956477)     <boot dev='cdrom'/>
	I0127 12:48:45.667405 1790192 main.go:141] libmachine: (bridge-956477)     <boot dev='hd'/>
	I0127 12:48:45.667416 1790192 main.go:141] libmachine: (bridge-956477)     <bootmenu enable='no'/>
	I0127 12:48:45.667423 1790192 main.go:141] libmachine: (bridge-956477)   </os>
	I0127 12:48:45.667433 1790192 main.go:141] libmachine: (bridge-956477)   <devices>
	I0127 12:48:45.667441 1790192 main.go:141] libmachine: (bridge-956477)     <disk type='file' device='cdrom'>
	I0127 12:48:45.667452 1790192 main.go:141] libmachine: (bridge-956477)       <source file='/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/boot2docker.iso'/>
	I0127 12:48:45.667459 1790192 main.go:141] libmachine: (bridge-956477)       <target dev='hdc' bus='scsi'/>
	I0127 12:48:45.667464 1790192 main.go:141] libmachine: (bridge-956477)       <readonly/>
	I0127 12:48:45.667470 1790192 main.go:141] libmachine: (bridge-956477)     </disk>
	I0127 12:48:45.667480 1790192 main.go:141] libmachine: (bridge-956477)     <disk type='file' device='disk'>
	I0127 12:48:45.667502 1790192 main.go:141] libmachine: (bridge-956477)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 12:48:45.667514 1790192 main.go:141] libmachine: (bridge-956477)       <source file='/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/bridge-956477.rawdisk'/>
	I0127 12:48:45.667519 1790192 main.go:141] libmachine: (bridge-956477)       <target dev='hda' bus='virtio'/>
	I0127 12:48:45.667527 1790192 main.go:141] libmachine: (bridge-956477)     </disk>
	I0127 12:48:45.667531 1790192 main.go:141] libmachine: (bridge-956477)     <interface type='network'>
	I0127 12:48:45.667537 1790192 main.go:141] libmachine: (bridge-956477)       <source network='mk-bridge-956477'/>
	I0127 12:48:45.667544 1790192 main.go:141] libmachine: (bridge-956477)       <model type='virtio'/>
	I0127 12:48:45.667549 1790192 main.go:141] libmachine: (bridge-956477)     </interface>
	I0127 12:48:45.667555 1790192 main.go:141] libmachine: (bridge-956477)     <interface type='network'>
	I0127 12:48:45.667582 1790192 main.go:141] libmachine: (bridge-956477)       <source network='default'/>
	I0127 12:48:45.667600 1790192 main.go:141] libmachine: (bridge-956477)       <model type='virtio'/>
	I0127 12:48:45.667613 1790192 main.go:141] libmachine: (bridge-956477)     </interface>
	I0127 12:48:45.667621 1790192 main.go:141] libmachine: (bridge-956477)     <serial type='pty'>
	I0127 12:48:45.667633 1790192 main.go:141] libmachine: (bridge-956477)       <target port='0'/>
	I0127 12:48:45.667640 1790192 main.go:141] libmachine: (bridge-956477)     </serial>
	I0127 12:48:45.667651 1790192 main.go:141] libmachine: (bridge-956477)     <console type='pty'>
	I0127 12:48:45.667662 1790192 main.go:141] libmachine: (bridge-956477)       <target type='serial' port='0'/>
	I0127 12:48:45.667673 1790192 main.go:141] libmachine: (bridge-956477)     </console>
	I0127 12:48:45.667691 1790192 main.go:141] libmachine: (bridge-956477)     <rng model='virtio'>
	I0127 12:48:45.667705 1790192 main.go:141] libmachine: (bridge-956477)       <backend model='random'>/dev/random</backend>
	I0127 12:48:45.667714 1790192 main.go:141] libmachine: (bridge-956477)     </rng>
	I0127 12:48:45.667722 1790192 main.go:141] libmachine: (bridge-956477)     
	I0127 12:48:45.667731 1790192 main.go:141] libmachine: (bridge-956477)     
	I0127 12:48:45.667740 1790192 main.go:141] libmachine: (bridge-956477)   </devices>
	I0127 12:48:45.667749 1790192 main.go:141] libmachine: (bridge-956477) </domain>
	I0127 12:48:45.667765 1790192 main.go:141] libmachine: (bridge-956477) 
	I0127 12:48:45.672524 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:ac:62:83 in network default
	I0127 12:48:45.673006 1790192 main.go:141] libmachine: (bridge-956477) starting domain...
	I0127 12:48:45.673024 1790192 main.go:141] libmachine: (bridge-956477) ensuring networks are active...
	I0127 12:48:45.673031 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:45.673650 1790192 main.go:141] libmachine: (bridge-956477) Ensuring network default is active
	I0127 12:48:45.673918 1790192 main.go:141] libmachine: (bridge-956477) Ensuring network mk-bridge-956477 is active
	I0127 12:48:45.674443 1790192 main.go:141] libmachine: (bridge-956477) getting domain XML...
	I0127 12:48:45.675241 1790192 main.go:141] libmachine: (bridge-956477) creating domain...
	I0127 12:48:46.910072 1790192 main.go:141] libmachine: (bridge-956477) waiting for IP...
	I0127 12:48:46.910991 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:46.911503 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:46.911587 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:46.911518 1790215 retry.go:31] will retry after 215.854927ms: waiting for domain to come up
	I0127 12:48:47.128865 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:47.129422 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:47.129454 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:47.129389 1790215 retry.go:31] will retry after 345.744835ms: waiting for domain to come up
	I0127 12:48:47.476809 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:47.477321 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:47.477351 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:47.477304 1790215 retry.go:31] will retry after 387.587044ms: waiting for domain to come up
	I0127 12:48:47.867011 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:47.867519 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:47.867563 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:47.867512 1790215 retry.go:31] will retry after 564.938674ms: waiting for domain to come up
	I0127 12:48:48.434398 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:48.434970 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:48.434999 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:48.434928 1790215 retry.go:31] will retry after 628.439712ms: waiting for domain to come up
	I0127 12:48:49.064853 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:49.065323 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:49.065358 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:49.065288 1790215 retry.go:31] will retry after 745.70592ms: waiting for domain to come up
	I0127 12:48:49.813123 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:49.813748 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:49.813780 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:49.813723 1790215 retry.go:31] will retry after 1.074334161s: waiting for domain to come up
	I0127 12:48:50.889220 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:50.889785 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:50.889855 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:50.889789 1790215 retry.go:31] will retry after 1.318459201s: waiting for domain to come up
	I0127 12:48:52.210197 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:52.210618 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:52.210645 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:52.210599 1790215 retry.go:31] will retry after 1.764815725s: waiting for domain to come up
	I0127 12:48:53.976580 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:53.977130 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:53.977158 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:53.977081 1790215 retry.go:31] will retry after 1.410873374s: waiting for domain to come up
	I0127 12:48:55.389480 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:55.389911 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:55.389944 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:55.389893 1790215 retry.go:31] will retry after 2.738916299s: waiting for domain to come up
	I0127 12:48:58.130207 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:48:58.130681 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:48:58.130707 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:48:58.130646 1790215 retry.go:31] will retry after 3.218706779s: waiting for domain to come up
	I0127 12:49:01.351430 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:01.351988 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find current IP address of domain bridge-956477 in network mk-bridge-956477
	I0127 12:49:01.352019 1790192 main.go:141] libmachine: (bridge-956477) DBG | I0127 12:49:01.351955 1790215 retry.go:31] will retry after 4.065804066s: waiting for domain to come up
	I0127 12:49:05.419663 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:05.420108 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has current primary IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:05.420160 1790192 main.go:141] libmachine: (bridge-956477) found domain IP: 192.168.72.28
	I0127 12:49:05.420175 1790192 main.go:141] libmachine: (bridge-956477) reserving static IP address...
	I0127 12:49:05.420595 1790192 main.go:141] libmachine: (bridge-956477) DBG | unable to find host DHCP lease matching {name: "bridge-956477", mac: "52:54:00:49:99:d8", ip: "192.168.72.28"} in network mk-bridge-956477
	I0127 12:49:05.499266 1790192 main.go:141] libmachine: (bridge-956477) reserved static IP address 192.168.72.28 for domain bridge-956477
	I0127 12:49:05.499303 1790192 main.go:141] libmachine: (bridge-956477) waiting for SSH...
	I0127 12:49:05.499314 1790192 main.go:141] libmachine: (bridge-956477) DBG | Getting to WaitForSSH function...
	I0127 12:49:05.501992 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:05.502523 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:minikube Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:05.502574 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:05.502769 1790192 main.go:141] libmachine: (bridge-956477) DBG | Using SSH client type: external
	I0127 12:49:05.502798 1790192 main.go:141] libmachine: (bridge-956477) DBG | Using SSH private key: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/id_rsa (-rw-------)
	I0127 12:49:05.502836 1790192 main.go:141] libmachine: (bridge-956477) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.28 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 12:49:05.502851 1790192 main.go:141] libmachine: (bridge-956477) DBG | About to run SSH command:
	I0127 12:49:05.502863 1790192 main.go:141] libmachine: (bridge-956477) DBG | exit 0
	I0127 12:49:05.630859 1790192 main.go:141] libmachine: (bridge-956477) DBG | SSH cmd err, output: <nil>: 
	I0127 12:49:05.631203 1790192 main.go:141] libmachine: (bridge-956477) KVM machine creation complete
	I0127 12:49:05.631537 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetConfigRaw
	I0127 12:49:05.632120 1790192 main.go:141] libmachine: (bridge-956477) Calling .DriverName
	I0127 12:49:05.632328 1790192 main.go:141] libmachine: (bridge-956477) Calling .DriverName
	I0127 12:49:05.632512 1790192 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0127 12:49:05.632550 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetState
	I0127 12:49:05.633838 1790192 main.go:141] libmachine: Detecting operating system of created instance...
	I0127 12:49:05.633852 1790192 main.go:141] libmachine: Waiting for SSH to be available...
	I0127 12:49:05.633858 1790192 main.go:141] libmachine: Getting to WaitForSSH function...
	I0127 12:49:05.633864 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:05.635988 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:05.636359 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:05.636387 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:05.636482 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:05.636688 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:05.636840 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:05.636999 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:05.637148 1790192 main.go:141] libmachine: Using SSH client type: native
	I0127 12:49:05.637417 1790192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0127 12:49:05.637432 1790192 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0127 12:49:05.753913 1790192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 12:49:05.753957 1790192 main.go:141] libmachine: Detecting the provisioner...
	I0127 12:49:05.753969 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:05.757035 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:05.757484 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:05.757521 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:05.757749 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:05.757961 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:05.758132 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:05.758270 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:05.758481 1790192 main.go:141] libmachine: Using SSH client type: native
	I0127 12:49:05.758721 1790192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0127 12:49:05.758739 1790192 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0127 12:49:05.871011 1790192 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0127 12:49:05.871181 1790192 main.go:141] libmachine: found compatible host: buildroot
	I0127 12:49:05.871198 1790192 main.go:141] libmachine: Provisioning with buildroot...
	I0127 12:49:05.871211 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetMachineName
	I0127 12:49:05.871499 1790192 buildroot.go:166] provisioning hostname "bridge-956477"
	I0127 12:49:05.871532 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetMachineName
	I0127 12:49:05.871711 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:05.874488 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:05.874941 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:05.874964 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:05.875152 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:05.875328 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:05.875456 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:05.875555 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:05.875684 1790192 main.go:141] libmachine: Using SSH client type: native
	I0127 12:49:05.875864 1790192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0127 12:49:05.875875 1790192 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-956477 && echo "bridge-956477" | sudo tee /etc/hostname
	I0127 12:49:05.999963 1790192 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-956477
	
	I0127 12:49:06.000010 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:06.002594 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.003041 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.003070 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.003263 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:06.003462 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:06.003628 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:06.003746 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:06.003889 1790192 main.go:141] libmachine: Using SSH client type: native
	I0127 12:49:06.004099 1790192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0127 12:49:06.004116 1790192 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-956477' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-956477/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-956477' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 12:49:06.126689 1790192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 12:49:06.126724 1790192 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20318-1724227/.minikube CaCertPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20318-1724227/.minikube}
	I0127 12:49:06.126788 1790192 buildroot.go:174] setting up certificates
	I0127 12:49:06.126798 1790192 provision.go:84] configureAuth start
	I0127 12:49:06.126811 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetMachineName
	I0127 12:49:06.127071 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetIP
	I0127 12:49:06.129597 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.129936 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.129956 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.130134 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:06.132135 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.132428 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.132453 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.132601 1790192 provision.go:143] copyHostCerts
	I0127 12:49:06.132670 1790192 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.pem, removing ...
	I0127 12:49:06.132693 1790192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.pem
	I0127 12:49:06.132778 1790192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.pem (1078 bytes)
	I0127 12:49:06.132883 1790192 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-1724227/.minikube/cert.pem, removing ...
	I0127 12:49:06.132896 1790192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-1724227/.minikube/cert.pem
	I0127 12:49:06.132941 1790192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20318-1724227/.minikube/cert.pem (1123 bytes)
	I0127 12:49:06.133012 1790192 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-1724227/.minikube/key.pem, removing ...
	I0127 12:49:06.133023 1790192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-1724227/.minikube/key.pem
	I0127 12:49:06.133056 1790192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20318-1724227/.minikube/key.pem (1675 bytes)
	I0127 12:49:06.133127 1790192 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca-key.pem org=jenkins.bridge-956477 san=[127.0.0.1 192.168.72.28 bridge-956477 localhost minikube]
	I0127 12:49:06.244065 1790192 provision.go:177] copyRemoteCerts
	I0127 12:49:06.244134 1790192 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 12:49:06.244179 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:06.247068 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.247401 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.247439 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.247543 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:06.247734 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:06.247886 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:06.248045 1790192 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/id_rsa Username:docker}
	I0127 12:49:06.332164 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 12:49:06.355222 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0127 12:49:06.377606 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 12:49:06.400935 1790192 provision.go:87] duration metric: took 274.121357ms to configureAuth
	I0127 12:49:06.400966 1790192 buildroot.go:189] setting minikube options for container-runtime
	I0127 12:49:06.401190 1790192 config.go:182] Loaded profile config "bridge-956477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:49:06.401304 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:06.403876 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.404282 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.404311 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.404522 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:06.404717 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:06.404875 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:06.405024 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:06.405242 1790192 main.go:141] libmachine: Using SSH client type: native
	I0127 12:49:06.405432 1790192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0127 12:49:06.405453 1790192 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 12:49:06.632004 1790192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 12:49:06.632052 1790192 main.go:141] libmachine: Checking connection to Docker...
	I0127 12:49:06.632066 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetURL
	I0127 12:49:06.633455 1790192 main.go:141] libmachine: (bridge-956477) DBG | using libvirt version 6000000
	I0127 12:49:06.635940 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.636296 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.636319 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.636439 1790192 main.go:141] libmachine: Docker is up and running!
	I0127 12:49:06.636466 1790192 main.go:141] libmachine: Reticulating splines...
	I0127 12:49:06.636474 1790192 client.go:171] duration metric: took 21.485034654s to LocalClient.Create
	I0127 12:49:06.636493 1790192 start.go:167] duration metric: took 21.485094344s to libmachine.API.Create "bridge-956477"
	I0127 12:49:06.636508 1790192 start.go:293] postStartSetup for "bridge-956477" (driver="kvm2")
	I0127 12:49:06.636525 1790192 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 12:49:06.636556 1790192 main.go:141] libmachine: (bridge-956477) Calling .DriverName
	I0127 12:49:06.636838 1790192 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 12:49:06.636862 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:06.639069 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.639386 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.639422 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.639563 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:06.639752 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:06.639929 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:06.640062 1790192 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/id_rsa Username:docker}
	I0127 12:49:06.724850 1790192 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 12:49:06.729112 1790192 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 12:49:06.729134 1790192 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-1724227/.minikube/addons for local assets ...
	I0127 12:49:06.729192 1790192 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-1724227/.minikube/files for local assets ...
	I0127 12:49:06.729293 1790192 filesync.go:149] local asset: /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem -> 17313962.pem in /etc/ssl/certs
	I0127 12:49:06.729434 1790192 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 12:49:06.738467 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem --> /etc/ssl/certs/17313962.pem (1708 bytes)
	I0127 12:49:06.761545 1790192 start.go:296] duration metric: took 125.019791ms for postStartSetup
	I0127 12:49:06.761593 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetConfigRaw
	I0127 12:49:06.762205 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetIP
	I0127 12:49:06.765437 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.765808 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.765828 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.766138 1790192 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/config.json ...
	I0127 12:49:06.766350 1790192 start.go:128] duration metric: took 21.63314943s to createHost
	I0127 12:49:06.766380 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:06.768832 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.769141 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.769168 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.769330 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:06.769547 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:06.769745 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:06.769899 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:06.770075 1790192 main.go:141] libmachine: Using SSH client type: native
	I0127 12:49:06.770262 1790192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0127 12:49:06.770272 1790192 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 12:49:06.887120 1790192 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737982146.857755472
	
	I0127 12:49:06.887157 1790192 fix.go:216] guest clock: 1737982146.857755472
	I0127 12:49:06.887177 1790192 fix.go:229] Guest: 2025-01-27 12:49:06.857755472 +0000 UTC Remote: 2025-01-27 12:49:06.76636518 +0000 UTC m=+21.744166745 (delta=91.390292ms)
	I0127 12:49:06.887213 1790192 fix.go:200] guest clock delta is within tolerance: 91.390292ms
	I0127 12:49:06.887222 1790192 start.go:83] releasing machines lock for "bridge-956477", held for 21.754125785s
	I0127 12:49:06.887266 1790192 main.go:141] libmachine: (bridge-956477) Calling .DriverName
	I0127 12:49:06.887556 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetIP
	I0127 12:49:06.890291 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.890686 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.890715 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.890834 1790192 main.go:141] libmachine: (bridge-956477) Calling .DriverName
	I0127 12:49:06.891309 1790192 main.go:141] libmachine: (bridge-956477) Calling .DriverName
	I0127 12:49:06.891479 1790192 main.go:141] libmachine: (bridge-956477) Calling .DriverName
	I0127 12:49:06.891572 1790192 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 12:49:06.891614 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:06.891715 1790192 ssh_runner.go:195] Run: cat /version.json
	I0127 12:49:06.891742 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:06.894127 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.894492 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.894531 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.894720 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.894976 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:06.895300 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:06.895305 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:06.895579 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:06.895614 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:06.895836 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:06.895831 1790192 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/id_rsa Username:docker}
	I0127 12:49:06.896003 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:06.896190 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:06.896366 1790192 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/id_rsa Username:docker}
	I0127 12:49:07.014147 1790192 ssh_runner.go:195] Run: systemctl --version
	I0127 12:49:07.020023 1790192 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 12:49:07.181331 1790192 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 12:49:07.186863 1790192 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 12:49:07.186954 1790192 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 12:49:07.203385 1790192 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 12:49:07.203419 1790192 start.go:495] detecting cgroup driver to use...
	I0127 12:49:07.203478 1790192 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 12:49:07.218431 1790192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 12:49:07.231459 1790192 docker.go:217] disabling cri-docker service (if available) ...
	I0127 12:49:07.231505 1790192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 12:49:07.244939 1790192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 12:49:07.257985 1790192 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 12:49:07.382245 1790192 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 12:49:07.544971 1790192 docker.go:233] disabling docker service ...
	I0127 12:49:07.545044 1790192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 12:49:07.559296 1790192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 12:49:07.572107 1790192 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 12:49:07.710722 1790192 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 12:49:07.842352 1790192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 12:49:07.856902 1790192 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 12:49:07.873833 1790192 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 12:49:07.873895 1790192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:49:07.883449 1790192 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 12:49:07.883540 1790192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:49:07.893268 1790192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:49:07.902934 1790192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:49:07.913200 1790192 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 12:49:07.923183 1790192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:49:07.932933 1790192 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:49:07.948940 1790192 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:49:07.958726 1790192 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 12:49:07.967409 1790192 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 12:49:07.967473 1790192 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 12:49:07.979872 1790192 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 12:49:07.988693 1790192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:49:08.106626 1790192 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 12:49:08.190261 1790192 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 12:49:08.190341 1790192 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 12:49:08.195228 1790192 start.go:563] Will wait 60s for crictl version
	I0127 12:49:08.195312 1790192 ssh_runner.go:195] Run: which crictl
	I0127 12:49:08.198797 1790192 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 12:49:08.237887 1790192 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 12:49:08.238012 1790192 ssh_runner.go:195] Run: crio --version
	I0127 12:49:08.263030 1790192 ssh_runner.go:195] Run: crio --version
	I0127 12:49:08.290320 1790192 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 12:49:08.291370 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetIP
	I0127 12:49:08.294322 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:08.294643 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:08.294675 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:08.294858 1790192 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0127 12:49:08.298640 1790192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:49:08.311920 1790192 kubeadm.go:883] updating cluster {Name:bridge-956477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-956477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.28 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 12:49:08.312091 1790192 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 12:49:08.312156 1790192 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 12:49:08.343416 1790192 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 12:49:08.343484 1790192 ssh_runner.go:195] Run: which lz4
	I0127 12:49:08.347177 1790192 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 12:49:08.351091 1790192 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 12:49:08.351126 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0127 12:49:09.560777 1790192 crio.go:462] duration metric: took 1.213632525s to copy over tarball
	I0127 12:49:09.560892 1790192 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 12:49:11.737884 1790192 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.176958842s)
	I0127 12:49:11.737916 1790192 crio.go:469] duration metric: took 2.177103692s to extract the tarball
	I0127 12:49:11.737927 1790192 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 12:49:11.774005 1790192 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 12:49:11.812704 1790192 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 12:49:11.812729 1790192 cache_images.go:84] Images are preloaded, skipping loading
	I0127 12:49:11.812737 1790192 kubeadm.go:934] updating node { 192.168.72.28 8443 v1.32.1 crio true true} ...
	I0127 12:49:11.812874 1790192 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-956477 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.28
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:bridge-956477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0127 12:49:11.812971 1790192 ssh_runner.go:195] Run: crio config
	I0127 12:49:11.868174 1790192 cni.go:84] Creating CNI manager for "bridge"
	I0127 12:49:11.868200 1790192 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 12:49:11.868222 1790192 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.28 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-956477 NodeName:bridge-956477 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.28"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.28 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 12:49:11.868356 1790192 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.28
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-956477"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.28"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.28"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 12:49:11.868420 1790192 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 12:49:11.877576 1790192 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 12:49:11.877641 1790192 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 12:49:11.886156 1790192 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0127 12:49:11.901855 1790192 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 12:49:11.917311 1790192 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I0127 12:49:11.933025 1790192 ssh_runner.go:195] Run: grep 192.168.72.28	control-plane.minikube.internal$ /etc/hosts
	I0127 12:49:11.936616 1790192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.28	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:49:11.948439 1790192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:49:12.060451 1790192 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:49:12.076612 1790192 certs.go:68] Setting up /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477 for IP: 192.168.72.28
	I0127 12:49:12.076638 1790192 certs.go:194] generating shared ca certs ...
	I0127 12:49:12.076680 1790192 certs.go:226] acquiring lock for ca certs: {Name:mk9e5c59e9dbe250ccc1601895509e2d4f3690e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:49:12.076872 1790192 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.key
	I0127 12:49:12.076941 1790192 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.key
	I0127 12:49:12.076955 1790192 certs.go:256] generating profile certs ...
	I0127 12:49:12.077065 1790192 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/client.key
	I0127 12:49:12.077096 1790192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/client.crt with IP's: []
	I0127 12:49:12.388180 1790192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/client.crt ...
	I0127 12:49:12.388212 1790192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/client.crt: {Name:mk35e754849912c2ccbef7aee78a8cb664d71760 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:49:12.393143 1790192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/client.key ...
	I0127 12:49:12.393176 1790192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/client.key: {Name:mk1a4eb1684f2df27d8a0393e4c3ccce9e3de875 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:49:12.393803 1790192 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.key.754e3ec9
	I0127 12:49:12.393834 1790192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.crt.754e3ec9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.28]
	I0127 12:49:12.504705 1790192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.crt.754e3ec9 ...
	I0127 12:49:12.504741 1790192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.crt.754e3ec9: {Name:mkc470d67580d2e81bf8ee097c21f9b4e89d97ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:49:12.504924 1790192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.key.754e3ec9 ...
	I0127 12:49:12.504944 1790192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.key.754e3ec9: {Name:mkfe8a7bf14247bc7909277acbea55dbda14424f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:49:12.505661 1790192 certs.go:381] copying /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.crt.754e3ec9 -> /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.crt
	I0127 12:49:12.505776 1790192 certs.go:385] copying /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.key.754e3ec9 -> /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.key
	I0127 12:49:12.505863 1790192 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/proxy-client.key
	I0127 12:49:12.505887 1790192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/proxy-client.crt with IP's: []
	I0127 12:49:12.609829 1790192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/proxy-client.crt ...
	I0127 12:49:12.609856 1790192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/proxy-client.crt: {Name:mk6cb77c1a7b511e7130b2dd7423c6ba9c6d37ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:49:12.610644 1790192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/proxy-client.key ...
	I0127 12:49:12.610664 1790192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/proxy-client.key: {Name:mkd90fcc60d00c9236b383668f8a16c0de9554e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:49:12.614971 1790192 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/1731396.pem (1338 bytes)
	W0127 12:49:12.615016 1790192 certs.go:480] ignoring /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/1731396_empty.pem, impossibly tiny 0 bytes
	I0127 12:49:12.615026 1790192 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 12:49:12.615065 1790192 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/ca.pem (1078 bytes)
	I0127 12:49:12.615119 1790192 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/cert.pem (1123 bytes)
	I0127 12:49:12.615159 1790192 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/key.pem (1675 bytes)
	I0127 12:49:12.615202 1790192 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem (1708 bytes)
	I0127 12:49:12.615902 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 12:49:12.642386 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 12:49:12.667109 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 12:49:12.688637 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 12:49:12.711307 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0127 12:49:12.732852 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 12:49:12.756599 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 12:49:12.812442 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/bridge-956477/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 12:49:12.836060 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/certs/1731396.pem --> /usr/share/ca-certificates/1731396.pem (1338 bytes)
	I0127 12:49:12.857115 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/ssl/certs/17313962.pem --> /usr/share/ca-certificates/17313962.pem (1708 bytes)
	I0127 12:49:12.879108 1790192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-1724227/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 12:49:12.900872 1790192 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 12:49:12.917407 1790192 ssh_runner.go:195] Run: openssl version
	I0127 12:49:12.922608 1790192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1731396.pem && ln -fs /usr/share/ca-certificates/1731396.pem /etc/ssl/certs/1731396.pem"
	I0127 12:49:12.933376 1790192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1731396.pem
	I0127 12:49:12.937409 1790192 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 11:32 /usr/share/ca-certificates/1731396.pem
	I0127 12:49:12.937451 1790192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1731396.pem
	I0127 12:49:12.942881 1790192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1731396.pem /etc/ssl/certs/51391683.0"
	I0127 12:49:12.953628 1790192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17313962.pem && ln -fs /usr/share/ca-certificates/17313962.pem /etc/ssl/certs/17313962.pem"
	I0127 12:49:12.964554 1790192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17313962.pem
	I0127 12:49:12.968534 1790192 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 11:32 /usr/share/ca-certificates/17313962.pem
	I0127 12:49:12.968581 1790192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17313962.pem
	I0127 12:49:12.973893 1790192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17313962.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 12:49:12.984546 1790192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 12:49:12.994913 1790192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:49:12.998791 1790192 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 11:24 /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:49:12.998841 1790192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:49:13.003870 1790192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 12:49:13.013262 1790192 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 12:49:13.016784 1790192 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 12:49:13.016833 1790192 kubeadm.go:392] StartCluster: {Name:bridge-956477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-956477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.28 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:49:13.016911 1790192 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 12:49:13.016987 1790192 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 12:49:13.050812 1790192 cri.go:89] found id: ""
	I0127 12:49:13.050889 1790192 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 12:49:13.059865 1790192 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 12:49:13.068783 1790192 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 12:49:13.077676 1790192 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 12:49:13.077698 1790192 kubeadm.go:157] found existing configuration files:
	
	I0127 12:49:13.077743 1790192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 12:49:13.086826 1790192 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 12:49:13.086886 1790192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 12:49:13.096763 1790192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 12:49:13.106090 1790192 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 12:49:13.106152 1790192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 12:49:13.115056 1790192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 12:49:13.123311 1790192 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 12:49:13.123381 1790192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 12:49:13.134697 1790192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 12:49:13.145287 1790192 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 12:49:13.145360 1790192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 12:49:13.156930 1790192 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 12:49:13.215215 1790192 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 12:49:13.215384 1790192 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 12:49:13.321518 1790192 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 12:49:13.321678 1790192 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 12:49:13.321803 1790192 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 12:49:13.332363 1790192 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 12:49:13.473799 1790192 out.go:235]   - Generating certificates and keys ...
	I0127 12:49:13.473979 1790192 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 12:49:13.474081 1790192 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 12:49:13.685866 1790192 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 12:49:13.770778 1790192 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 12:49:14.148126 1790192 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 12:49:14.239549 1790192 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 12:49:14.286201 1790192 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 12:49:14.286341 1790192 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-956477 localhost] and IPs [192.168.72.28 127.0.0.1 ::1]
	I0127 12:49:14.383724 1790192 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 12:49:14.383950 1790192 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-956477 localhost] and IPs [192.168.72.28 127.0.0.1 ::1]
	I0127 12:49:14.501996 1790192 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 12:49:14.665536 1790192 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 12:49:14.804446 1790192 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 12:49:14.804529 1790192 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 12:49:14.897657 1790192 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 12:49:14.966489 1790192 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 12:49:15.104336 1790192 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 12:49:15.164491 1790192 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 12:49:15.350906 1790192 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 12:49:15.351563 1790192 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 12:49:15.354014 1790192 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 12:49:15.355551 1790192 out.go:235]   - Booting up control plane ...
	I0127 12:49:15.355691 1790192 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 12:49:15.355786 1790192 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 12:49:15.356057 1790192 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 12:49:15.370685 1790192 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 12:49:15.376916 1790192 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 12:49:15.377006 1790192 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 12:49:15.515590 1790192 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 12:49:15.515750 1790192 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 12:49:16.516381 1790192 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001998745s
	I0127 12:49:16.516512 1790192 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 12:49:21.514222 1790192 kubeadm.go:310] [api-check] The API server is healthy after 5.001594227s
	I0127 12:49:21.532591 1790192 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 12:49:21.554627 1790192 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 12:49:21.596778 1790192 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 12:49:21.597017 1790192 kubeadm.go:310] [mark-control-plane] Marking the node bridge-956477 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 12:49:21.613382 1790192 kubeadm.go:310] [bootstrap-token] Using token: y217q3.atj9ddkanm9dqcqt
	I0127 12:49:21.614522 1790192 out.go:235]   - Configuring RBAC rules ...
	I0127 12:49:21.614665 1790192 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 12:49:21.626049 1790192 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 12:49:21.635045 1790192 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 12:49:21.642711 1790192 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 12:49:21.646716 1790192 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 12:49:21.650577 1790192 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 12:49:21.921382 1790192 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 12:49:22.339910 1790192 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 12:49:22.920294 1790192 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 12:49:22.921302 1790192 kubeadm.go:310] 
	I0127 12:49:22.921394 1790192 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 12:49:22.921411 1790192 kubeadm.go:310] 
	I0127 12:49:22.921499 1790192 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 12:49:22.921508 1790192 kubeadm.go:310] 
	I0127 12:49:22.921542 1790192 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 12:49:22.921642 1790192 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 12:49:22.921726 1790192 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 12:49:22.921741 1790192 kubeadm.go:310] 
	I0127 12:49:22.921806 1790192 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 12:49:22.921817 1790192 kubeadm.go:310] 
	I0127 12:49:22.921886 1790192 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 12:49:22.921897 1790192 kubeadm.go:310] 
	I0127 12:49:22.921961 1790192 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 12:49:22.922086 1790192 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 12:49:22.922181 1790192 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 12:49:22.922191 1790192 kubeadm.go:310] 
	I0127 12:49:22.922311 1790192 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 12:49:22.922407 1790192 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 12:49:22.922421 1790192 kubeadm.go:310] 
	I0127 12:49:22.922529 1790192 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token y217q3.atj9ddkanm9dqcqt \
	I0127 12:49:22.922664 1790192 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b321ef7b3ab14f6d2ff43d6919f4b8b8c82a4707be0e2d90af4f9d1ba84cbc1f \
	I0127 12:49:22.922701 1790192 kubeadm.go:310] 	--control-plane 
	I0127 12:49:22.922707 1790192 kubeadm.go:310] 
	I0127 12:49:22.922801 1790192 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 12:49:22.922809 1790192 kubeadm.go:310] 
	I0127 12:49:22.922871 1790192 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token y217q3.atj9ddkanm9dqcqt \
	I0127 12:49:22.922996 1790192 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b321ef7b3ab14f6d2ff43d6919f4b8b8c82a4707be0e2d90af4f9d1ba84cbc1f 
	I0127 12:49:22.923821 1790192 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 12:49:22.924014 1790192 cni.go:84] Creating CNI manager for "bridge"
	I0127 12:49:22.926262 1790192 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 12:49:22.927449 1790192 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 12:49:22.937784 1790192 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 12:49:22.955872 1790192 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 12:49:22.955954 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:49:22.956000 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-956477 minikube.k8s.io/updated_at=2025_01_27T12_49_22_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650 minikube.k8s.io/name=bridge-956477 minikube.k8s.io/primary=true
	I0127 12:49:22.984921 1790192 ops.go:34] apiserver oom_adj: -16
	I0127 12:49:23.101816 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:49:23.602076 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:49:24.102582 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:49:24.601942 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:49:25.102360 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:49:25.602350 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:49:26.102161 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:49:26.602794 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:49:27.102526 1790192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:49:27.237160 1790192 kubeadm.go:1113] duration metric: took 4.281277151s to wait for elevateKubeSystemPrivileges
	I0127 12:49:27.237200 1790192 kubeadm.go:394] duration metric: took 14.220369926s to StartCluster
	I0127 12:49:27.237228 1790192 settings.go:142] acquiring lock: {Name:mk5612abdbdf8001cdf3481ad7c8001a04d496dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:49:27.237320 1790192 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20318-1724227/kubeconfig
	I0127 12:49:27.238783 1790192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-1724227/kubeconfig: {Name:mk9a903cebca75da3c74ff68f708e0a4e05f999b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:49:27.239069 1790192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 12:49:27.239072 1790192 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.28 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 12:49:27.239175 1790192 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 12:49:27.239310 1790192 addons.go:69] Setting storage-provisioner=true in profile "bridge-956477"
	I0127 12:49:27.239320 1790192 config.go:182] Loaded profile config "bridge-956477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:49:27.239330 1790192 addons.go:238] Setting addon storage-provisioner=true in "bridge-956477"
	I0127 12:49:27.239333 1790192 addons.go:69] Setting default-storageclass=true in profile "bridge-956477"
	I0127 12:49:27.239365 1790192 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-956477"
	I0127 12:49:27.239371 1790192 host.go:66] Checking if "bridge-956477" exists ...
	I0127 12:49:27.239830 1790192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:49:27.239873 1790192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:49:27.239917 1790192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:49:27.239957 1790192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:49:27.240680 1790192 out.go:177] * Verifying Kubernetes components...
	I0127 12:49:27.241931 1790192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:49:27.261385 1790192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34619
	I0127 12:49:27.261452 1790192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41767
	I0127 12:49:27.261810 1790192 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:49:27.262003 1790192 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:49:27.262389 1790192 main.go:141] libmachine: Using API Version  1
	I0127 12:49:27.262417 1790192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:49:27.262543 1790192 main.go:141] libmachine: Using API Version  1
	I0127 12:49:27.262563 1790192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:49:27.262767 1790192 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:49:27.262952 1790192 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:49:27.262989 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetState
	I0127 12:49:27.263506 1790192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:49:27.263537 1790192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:49:27.266688 1790192 addons.go:238] Setting addon default-storageclass=true in "bridge-956477"
	I0127 12:49:27.266732 1790192 host.go:66] Checking if "bridge-956477" exists ...
	I0127 12:49:27.267120 1790192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:49:27.267168 1790192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:49:27.278963 1790192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35581
	I0127 12:49:27.279421 1790192 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:49:27.279976 1790192 main.go:141] libmachine: Using API Version  1
	I0127 12:49:27.279999 1790192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:49:27.280431 1790192 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:49:27.280692 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetState
	I0127 12:49:27.282702 1790192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36023
	I0127 12:49:27.282845 1790192 main.go:141] libmachine: (bridge-956477) Calling .DriverName
	I0127 12:49:27.283179 1790192 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:49:27.283627 1790192 main.go:141] libmachine: Using API Version  1
	I0127 12:49:27.283649 1790192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:49:27.283978 1790192 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:49:27.284748 1790192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:49:27.284785 1790192 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:49:27.284797 1790192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:49:27.285956 1790192 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:49:27.285977 1790192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 12:49:27.286001 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:27.288697 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:27.289087 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:27.289110 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:27.289304 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:27.289459 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:27.289574 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:27.289669 1790192 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/id_rsa Username:docker}
	I0127 12:49:27.301672 1790192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33341
	I0127 12:49:27.302317 1790192 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:49:27.302925 1790192 main.go:141] libmachine: Using API Version  1
	I0127 12:49:27.302949 1790192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:49:27.303263 1790192 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:49:27.303488 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetState
	I0127 12:49:27.305258 1790192 main.go:141] libmachine: (bridge-956477) Calling .DriverName
	I0127 12:49:27.305479 1790192 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 12:49:27.305497 1790192 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 12:49:27.305517 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHHostname
	I0127 12:49:27.308750 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:27.309243 1790192 main.go:141] libmachine: (bridge-956477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:99:d8", ip: ""} in network mk-bridge-956477: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:00 +0000 UTC Type:0 Mac:52:54:00:49:99:d8 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:bridge-956477 Clientid:01:52:54:00:49:99:d8}
	I0127 12:49:27.309269 1790192 main.go:141] libmachine: (bridge-956477) DBG | domain bridge-956477 has defined IP address 192.168.72.28 and MAC address 52:54:00:49:99:d8 in network mk-bridge-956477
	I0127 12:49:27.309409 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHPort
	I0127 12:49:27.309585 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHKeyPath
	I0127 12:49:27.309726 1790192 main.go:141] libmachine: (bridge-956477) Calling .GetSSHUsername
	I0127 12:49:27.309875 1790192 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/bridge-956477/id_rsa Username:docker}
	I0127 12:49:27.500640 1790192 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:49:27.500778 1790192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0127 12:49:27.538353 1790192 node_ready.go:35] waiting up to 15m0s for node "bridge-956477" to be "Ready" ...
	I0127 12:49:27.548400 1790192 node_ready.go:49] node "bridge-956477" has status "Ready":"True"
	I0127 12:49:27.548443 1790192 node_ready.go:38] duration metric: took 10.053639ms for node "bridge-956477" to be "Ready" ...
	I0127 12:49:27.548459 1790192 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:49:27.564271 1790192 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-c87bh" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:27.632137 1790192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:49:27.647091 1790192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 12:49:28.184542 1790192 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0127 12:49:28.549638 1790192 main.go:141] libmachine: Making call to close driver server
	I0127 12:49:28.549663 1790192 main.go:141] libmachine: (bridge-956477) Calling .Close
	I0127 12:49:28.550103 1790192 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:49:28.550127 1790192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:49:28.550137 1790192 main.go:141] libmachine: Making call to close driver server
	I0127 12:49:28.550144 1790192 main.go:141] libmachine: (bridge-956477) Calling .Close
	I0127 12:49:28.550198 1790192 main.go:141] libmachine: (bridge-956477) DBG | Closing plugin on server side
	I0127 12:49:28.550409 1790192 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:49:28.550429 1790192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:49:28.550443 1790192 main.go:141] libmachine: (bridge-956477) DBG | Closing plugin on server side
	I0127 12:49:28.550800 1790192 main.go:141] libmachine: Making call to close driver server
	I0127 12:49:28.550816 1790192 main.go:141] libmachine: (bridge-956477) Calling .Close
	I0127 12:49:28.551057 1790192 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:49:28.551076 1790192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:49:28.551081 1790192 main.go:141] libmachine: (bridge-956477) DBG | Closing plugin on server side
	I0127 12:49:28.551085 1790192 main.go:141] libmachine: Making call to close driver server
	I0127 12:49:28.551098 1790192 main.go:141] libmachine: (bridge-956477) Calling .Close
	I0127 12:49:28.551316 1790192 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:49:28.551331 1790192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:49:28.575614 1790192 main.go:141] libmachine: Making call to close driver server
	I0127 12:49:28.575665 1790192 main.go:141] libmachine: (bridge-956477) Calling .Close
	I0127 12:49:28.575924 1790192 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:49:28.575979 1790192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:49:28.575978 1790192 main.go:141] libmachine: (bridge-956477) DBG | Closing plugin on server side
	I0127 12:49:28.577474 1790192 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0127 12:49:28.578591 1790192 addons.go:514] duration metric: took 1.33943345s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0127 12:49:28.695806 1790192 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-956477" context rescaled to 1 replicas
	I0127 12:49:29.570116 1790192 pod_ready.go:103] pod "coredns-668d6bf9bc-c87bh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:49:31.570640 1790192 pod_ready.go:103] pod "coredns-668d6bf9bc-c87bh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:49:33.572383 1790192 pod_ready.go:103] pod "coredns-668d6bf9bc-c87bh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:49:34.570677 1790192 pod_ready.go:98] pod "coredns-668d6bf9bc-c87bh" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 12:49:34 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 12:49:27 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 12:49:27 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 12:49:27 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 12:49:27 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.72.28 HostIPs:[{IP:192.168.72.
28}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-01-27 12:49:27 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-01-27 12:49:28 +0000 UTC,FinishedAt:2025-01-27 12:49:34 +0000 UTC,ContainerID:cri-o://f63238121640928054da5b75a8267e8c3b0ecb455be8db829959a17b2f86f494,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://f63238121640928054da5b75a8267e8c3b0ecb455be8db829959a17b2f86f494 Started:0xc0023f14c0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0021ef1e0} {Name:kube-api-access-j5rfl MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0021ef1f0}] User:nil
AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0127 12:49:34.570712 1790192 pod_ready.go:82] duration metric: took 7.006412478s for pod "coredns-668d6bf9bc-c87bh" in "kube-system" namespace to be "Ready" ...
	E0127 12:49:34.570726 1790192 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-668d6bf9bc-c87bh" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 12:49:34 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 12:49:27 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 12:49:27 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 12:49:27 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 12:49:27 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.7
2.28 HostIPs:[{IP:192.168.72.28}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-01-27 12:49:27 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-01-27 12:49:28 +0000 UTC,FinishedAt:2025-01-27 12:49:34 +0000 UTC,ContainerID:cri-o://f63238121640928054da5b75a8267e8c3b0ecb455be8db829959a17b2f86f494,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://f63238121640928054da5b75a8267e8c3b0ecb455be8db829959a17b2f86f494 Started:0xc0023f14c0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0021ef1e0} {Name:kube-api-access-j5rfl MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveRead
Only:0xc0021ef1f0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0127 12:49:34.570736 1790192 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-q9r6j" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:34.575210 1790192 pod_ready.go:93] pod "coredns-668d6bf9bc-q9r6j" in "kube-system" namespace has status "Ready":"True"
	I0127 12:49:34.575232 1790192 pod_ready.go:82] duration metric: took 4.46563ms for pod "coredns-668d6bf9bc-q9r6j" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:34.575241 1790192 pod_ready.go:79] waiting up to 15m0s for pod "etcd-bridge-956477" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:36.082910 1790192 pod_ready.go:93] pod "etcd-bridge-956477" in "kube-system" namespace has status "Ready":"True"
	I0127 12:49:36.082952 1790192 pod_ready.go:82] duration metric: took 1.507702821s for pod "etcd-bridge-956477" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:36.082968 1790192 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-bridge-956477" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:36.086925 1790192 pod_ready.go:93] pod "kube-apiserver-bridge-956477" in "kube-system" namespace has status "Ready":"True"
	I0127 12:49:36.086953 1790192 pod_ready.go:82] duration metric: took 3.975819ms for pod "kube-apiserver-bridge-956477" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:36.086969 1790192 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-bridge-956477" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:36.091952 1790192 pod_ready.go:93] pod "kube-controller-manager-bridge-956477" in "kube-system" namespace has status "Ready":"True"
	I0127 12:49:36.091969 1790192 pod_ready.go:82] duration metric: took 4.993389ms for pod "kube-controller-manager-bridge-956477" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:36.091978 1790192 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-8fw2n" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:36.170654 1790192 pod_ready.go:93] pod "kube-proxy-8fw2n" in "kube-system" namespace has status "Ready":"True"
	I0127 12:49:36.170678 1790192 pod_ready.go:82] duration metric: took 78.694605ms for pod "kube-proxy-8fw2n" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:36.170688 1790192 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-bridge-956477" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:36.568993 1790192 pod_ready.go:93] pod "kube-scheduler-bridge-956477" in "kube-system" namespace has status "Ready":"True"
	I0127 12:49:36.569019 1790192 pod_ready.go:82] duration metric: took 398.324568ms for pod "kube-scheduler-bridge-956477" in "kube-system" namespace to be "Ready" ...
	I0127 12:49:36.569029 1790192 pod_ready.go:39] duration metric: took 9.020555356s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:49:36.569047 1790192 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:49:36.569110 1790192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:49:36.585221 1790192 api_server.go:72] duration metric: took 9.346111182s to wait for apiserver process to appear ...
	I0127 12:49:36.585260 1790192 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:49:36.585284 1790192 api_server.go:253] Checking apiserver healthz at https://192.168.72.28:8443/healthz ...
	I0127 12:49:36.592716 1790192 api_server.go:279] https://192.168.72.28:8443/healthz returned 200:
	ok
	I0127 12:49:36.594292 1790192 api_server.go:141] control plane version: v1.32.1
	I0127 12:49:36.594316 1790192 api_server.go:131] duration metric: took 9.04907ms to wait for apiserver health ...
	I0127 12:49:36.594325 1790192 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:49:36.771302 1790192 system_pods.go:59] 7 kube-system pods found
	I0127 12:49:36.771341 1790192 system_pods.go:61] "coredns-668d6bf9bc-q9r6j" [999c9062-2e0b-476e-8cf2-f462a0280779] Running
	I0127 12:49:36.771347 1790192 system_pods.go:61] "etcd-bridge-956477" [d82e5e0c-3cd1-48bb-9d1f-574dbca5e0cc] Running
	I0127 12:49:36.771353 1790192 system_pods.go:61] "kube-apiserver-bridge-956477" [8cbb1927-3e41-4894-b646-a02b07cfc4da] Running
	I0127 12:49:36.771358 1790192 system_pods.go:61] "kube-controller-manager-bridge-956477" [1214913d-b397-4e00-9d3f-927a4e471293] Running
	I0127 12:49:36.771363 1790192 system_pods.go:61] "kube-proxy-8fw2n" [00316310-fd3c-4bb3-91e1-0e309ea0cade] Running
	I0127 12:49:36.771368 1790192 system_pods.go:61] "kube-scheduler-bridge-956477" [5f90f0d7-62a7-49d0-b28a-cef4e5713bc4] Running
	I0127 12:49:36.771372 1790192 system_pods.go:61] "storage-provisioner" [417b172b-04aa-4f1a-8439-e4b76228f1ca] Running
	I0127 12:49:36.771382 1790192 system_pods.go:74] duration metric: took 177.049643ms to wait for pod list to return data ...
	I0127 12:49:36.771394 1790192 default_sa.go:34] waiting for default service account to be created ...
	I0127 12:49:36.969860 1790192 default_sa.go:45] found service account: "default"
	I0127 12:49:36.969891 1790192 default_sa.go:55] duration metric: took 198.486144ms for default service account to be created ...
	I0127 12:49:36.969903 1790192 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 12:49:37.173813 1790192 system_pods.go:87] 7 kube-system pods found
	I0127 12:49:37.370364 1790192 system_pods.go:105] "coredns-668d6bf9bc-q9r6j" [999c9062-2e0b-476e-8cf2-f462a0280779] Running
	I0127 12:49:37.370390 1790192 system_pods.go:105] "etcd-bridge-956477" [d82e5e0c-3cd1-48bb-9d1f-574dbca5e0cc] Running
	I0127 12:49:37.370396 1790192 system_pods.go:105] "kube-apiserver-bridge-956477" [8cbb1927-3e41-4894-b646-a02b07cfc4da] Running
	I0127 12:49:37.370401 1790192 system_pods.go:105] "kube-controller-manager-bridge-956477" [1214913d-b397-4e00-9d3f-927a4e471293] Running
	I0127 12:49:37.370407 1790192 system_pods.go:105] "kube-proxy-8fw2n" [00316310-fd3c-4bb3-91e1-0e309ea0cade] Running
	I0127 12:49:37.370411 1790192 system_pods.go:105] "kube-scheduler-bridge-956477" [5f90f0d7-62a7-49d0-b28a-cef4e5713bc4] Running
	I0127 12:49:37.370415 1790192 system_pods.go:105] "storage-provisioner" [417b172b-04aa-4f1a-8439-e4b76228f1ca] Running
	I0127 12:49:37.370423 1790192 system_pods.go:147] duration metric: took 400.513222ms to wait for k8s-apps to be running ...
	I0127 12:49:37.370430 1790192 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 12:49:37.370476 1790192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:49:37.386578 1790192 system_svc.go:56] duration metric: took 16.134406ms WaitForService to wait for kubelet
	I0127 12:49:37.386609 1790192 kubeadm.go:582] duration metric: took 10.147508217s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 12:49:37.386628 1790192 node_conditions.go:102] verifying NodePressure condition ...
	I0127 12:49:37.570387 1790192 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 12:49:37.570420 1790192 node_conditions.go:123] node cpu capacity is 2
	I0127 12:49:37.570439 1790192 node_conditions.go:105] duration metric: took 183.805809ms to run NodePressure ...
	I0127 12:49:37.570455 1790192 start.go:241] waiting for startup goroutines ...
	I0127 12:49:37.570466 1790192 start.go:246] waiting for cluster config update ...
	I0127 12:49:37.570478 1790192 start.go:255] writing updated cluster config ...
	I0127 12:49:37.570833 1790192 ssh_runner.go:195] Run: rm -f paused
	I0127 12:49:37.621383 1790192 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 12:49:37.623996 1790192 out.go:177] * Done! kubectl is now configured to use "bridge-956477" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 27 12:57:06 old-k8s-version-488586 crio[629]: time="2025-01-27 12:57:06.232872906Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737982626232842664,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=933bf92c-ed51-4be3-b8cc-13ab8c7e85bc name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:57:06 old-k8s-version-488586 crio[629]: time="2025-01-27 12:57:06.233537676Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=17043622-ad59-423c-9124-a350f22f5488 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:57:06 old-k8s-version-488586 crio[629]: time="2025-01-27 12:57:06.233599481Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=17043622-ad59-423c-9124-a350f22f5488 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:57:06 old-k8s-version-488586 crio[629]: time="2025-01-27 12:57:06.233631627Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=17043622-ad59-423c-9124-a350f22f5488 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:57:06 old-k8s-version-488586 crio[629]: time="2025-01-27 12:57:06.261107351Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d9b9bead-b8b0-4978-9c27-da04b45cc914 name=/runtime.v1.RuntimeService/Version
	Jan 27 12:57:06 old-k8s-version-488586 crio[629]: time="2025-01-27 12:57:06.261178770Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d9b9bead-b8b0-4978-9c27-da04b45cc914 name=/runtime.v1.RuntimeService/Version
	Jan 27 12:57:06 old-k8s-version-488586 crio[629]: time="2025-01-27 12:57:06.262407097Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=22b53094-4bfe-44ea-a16e-7f0290f8467b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:57:06 old-k8s-version-488586 crio[629]: time="2025-01-27 12:57:06.263032415Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737982626263007447,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=22b53094-4bfe-44ea-a16e-7f0290f8467b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:57:06 old-k8s-version-488586 crio[629]: time="2025-01-27 12:57:06.263610275Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e862f6f9-2ff5-4ea5-866a-830b200290a2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:57:06 old-k8s-version-488586 crio[629]: time="2025-01-27 12:57:06.263672506Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e862f6f9-2ff5-4ea5-866a-830b200290a2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:57:06 old-k8s-version-488586 crio[629]: time="2025-01-27 12:57:06.263710879Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e862f6f9-2ff5-4ea5-866a-830b200290a2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:57:06 old-k8s-version-488586 crio[629]: time="2025-01-27 12:57:06.292089061Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8f2fa38c-fd5f-481b-b3d2-664c92f63f10 name=/runtime.v1.RuntimeService/Version
	Jan 27 12:57:06 old-k8s-version-488586 crio[629]: time="2025-01-27 12:57:06.292165555Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8f2fa38c-fd5f-481b-b3d2-664c92f63f10 name=/runtime.v1.RuntimeService/Version
	Jan 27 12:57:06 old-k8s-version-488586 crio[629]: time="2025-01-27 12:57:06.293027083Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2bb2f7c2-2114-446e-ae87-4d6802486788 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:57:06 old-k8s-version-488586 crio[629]: time="2025-01-27 12:57:06.293393906Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737982626293372224,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2bb2f7c2-2114-446e-ae87-4d6802486788 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:57:06 old-k8s-version-488586 crio[629]: time="2025-01-27 12:57:06.293856037Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8176731c-7c1a-43ae-85ec-709d61bee58d name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:57:06 old-k8s-version-488586 crio[629]: time="2025-01-27 12:57:06.293975289Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8176731c-7c1a-43ae-85ec-709d61bee58d name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:57:06 old-k8s-version-488586 crio[629]: time="2025-01-27 12:57:06.294048032Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8176731c-7c1a-43ae-85ec-709d61bee58d name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:57:06 old-k8s-version-488586 crio[629]: time="2025-01-27 12:57:06.330754155Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dc8ced48-56ad-48b1-b927-68bf26a1b3b1 name=/runtime.v1.RuntimeService/Version
	Jan 27 12:57:06 old-k8s-version-488586 crio[629]: time="2025-01-27 12:57:06.330867843Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dc8ced48-56ad-48b1-b927-68bf26a1b3b1 name=/runtime.v1.RuntimeService/Version
	Jan 27 12:57:06 old-k8s-version-488586 crio[629]: time="2025-01-27 12:57:06.332042017Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7806c333-064a-4399-8390-40eaca0382cb name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:57:06 old-k8s-version-488586 crio[629]: time="2025-01-27 12:57:06.332427264Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737982626332405337,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7806c333-064a-4399-8390-40eaca0382cb name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:57:06 old-k8s-version-488586 crio[629]: time="2025-01-27 12:57:06.333047425Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8d047765-0a85-4564-9190-bf9ac1d5f878 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:57:06 old-k8s-version-488586 crio[629]: time="2025-01-27 12:57:06.333102504Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8d047765-0a85-4564-9190-bf9ac1d5f878 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:57:06 old-k8s-version-488586 crio[629]: time="2025-01-27 12:57:06.333136178Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8d047765-0a85-4564-9190-bf9ac1d5f878 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jan27 12:33] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053366] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041222] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.970481] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.025771] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.448056] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.962494] systemd-fstab-generator[554]: Ignoring "noauto" option for root device
	[  +0.061579] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.076809] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.161479] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.142173] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.226136] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +6.196155] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.067931] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.820821] systemd-fstab-generator[1007]: Ignoring "noauto" option for root device
	[Jan27 12:34] kauditd_printk_skb: 46 callbacks suppressed
	[Jan27 12:38] systemd-fstab-generator[5113]: Ignoring "noauto" option for root device
	[Jan27 12:40] systemd-fstab-generator[5392]: Ignoring "noauto" option for root device
	[  +0.064559] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:57:06 up 23 min,  0 users,  load average: 0.09, 0.04, 0.05
	Linux old-k8s-version-488586 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jan 27 12:57:03 old-k8s-version-488586 kubelet[7224]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Jan 27 12:57:03 old-k8s-version-488586 kubelet[7224]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Jan 27 12:57:03 old-k8s-version-488586 kubelet[7224]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Jan 27 12:57:03 old-k8s-version-488586 kubelet[7224]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0008626f0)
	Jan 27 12:57:03 old-k8s-version-488586 kubelet[7224]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Jan 27 12:57:03 old-k8s-version-488586 kubelet[7224]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000a7fef0, 0x4f0ac20, 0xc0003dd6d0, 0x1, 0xc00009e0c0)
	Jan 27 12:57:03 old-k8s-version-488586 kubelet[7224]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Jan 27 12:57:03 old-k8s-version-488586 kubelet[7224]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0009e42a0, 0xc00009e0c0)
	Jan 27 12:57:03 old-k8s-version-488586 kubelet[7224]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jan 27 12:57:03 old-k8s-version-488586 kubelet[7224]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jan 27 12:57:03 old-k8s-version-488586 kubelet[7224]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jan 27 12:57:03 old-k8s-version-488586 kubelet[7224]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc00097db30, 0xc0009fa9e0)
	Jan 27 12:57:03 old-k8s-version-488586 kubelet[7224]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jan 27 12:57:03 old-k8s-version-488586 kubelet[7224]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jan 27 12:57:03 old-k8s-version-488586 kubelet[7224]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jan 27 12:57:03 old-k8s-version-488586 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 27 12:57:03 old-k8s-version-488586 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 27 12:57:04 old-k8s-version-488586 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 177.
	Jan 27 12:57:04 old-k8s-version-488586 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 27 12:57:04 old-k8s-version-488586 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 27 12:57:04 old-k8s-version-488586 kubelet[7233]: I0127 12:57:04.641559    7233 server.go:416] Version: v1.20.0
	Jan 27 12:57:04 old-k8s-version-488586 kubelet[7233]: I0127 12:57:04.641886    7233 server.go:837] Client rotation is on, will bootstrap in background
	Jan 27 12:57:04 old-k8s-version-488586 kubelet[7233]: I0127 12:57:04.643819    7233 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 27 12:57:04 old-k8s-version-488586 kubelet[7233]: W0127 12:57:04.644686    7233 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jan 27 12:57:04 old-k8s-version-488586 kubelet[7233]: I0127 12:57:04.644868    7233 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-488586 -n old-k8s-version-488586
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-488586 -n old-k8s-version-488586: exit status 2 (224.770857ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-488586" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (362.78s)

                                                
                                    

Test pass (261/312)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 22.39
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.15
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.32.1/json-events 12.33
13 TestDownloadOnly/v1.32.1/preload-exists 0
17 TestDownloadOnly/v1.32.1/LogsDuration 0.61
18 TestDownloadOnly/v1.32.1/DeleteAll 0.15
19 TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.6
22 TestOffline 80.7
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 133.55
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 11.49
35 TestAddons/parallel/Registry 16.96
37 TestAddons/parallel/InspektorGadget 11.89
38 TestAddons/parallel/MetricsServer 6.82
40 TestAddons/parallel/CSI 41.08
41 TestAddons/parallel/Headlamp 20.81
42 TestAddons/parallel/CloudSpanner 5.62
43 TestAddons/parallel/LocalPath 57.11
44 TestAddons/parallel/NvidiaDevicePlugin 6.96
45 TestAddons/parallel/Yakd 11.81
47 TestAddons/StoppedEnableDisable 91.25
48 TestCertOptions 78.55
49 TestCertExpiration 286.89
51 TestForceSystemdFlag 78.92
52 TestForceSystemdEnv 40.51
54 TestKVMDriverInstallOrUpdate 3.85
58 TestErrorSpam/setup 37.87
59 TestErrorSpam/start 0.34
60 TestErrorSpam/status 0.71
61 TestErrorSpam/pause 1.49
62 TestErrorSpam/unpause 1.6
63 TestErrorSpam/stop 5.44
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 81.69
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 33.06
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.61
75 TestFunctional/serial/CacheCmd/cache/add_local 2.08
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.73
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 34.89
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.32
86 TestFunctional/serial/LogsFileCmd 1.37
87 TestFunctional/serial/InvalidService 4.27
89 TestFunctional/parallel/ConfigCmd 0.38
90 TestFunctional/parallel/DashboardCmd 40.5
91 TestFunctional/parallel/DryRun 0.31
92 TestFunctional/parallel/InternationalLanguage 0.16
93 TestFunctional/parallel/StatusCmd 0.9
97 TestFunctional/parallel/ServiceCmdConnect 9.54
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 51.84
101 TestFunctional/parallel/SSHCmd 0.45
102 TestFunctional/parallel/CpCmd 1.49
103 TestFunctional/parallel/MySQL 27.65
104 TestFunctional/parallel/FileSync 0.27
105 TestFunctional/parallel/CertSync 1.67
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.54
113 TestFunctional/parallel/License 0.55
114 TestFunctional/parallel/ServiceCmd/DeployApp 10.21
124 TestFunctional/parallel/Version/short 0.05
125 TestFunctional/parallel/Version/components 0.81
126 TestFunctional/parallel/ImageCommands/ImageListShort 1.61
127 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
128 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
129 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
130 TestFunctional/parallel/ImageCommands/ImageBuild 4.08
131 TestFunctional/parallel/ImageCommands/Setup 1.72
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.28
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.89
134 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.67
135 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.5
136 TestFunctional/parallel/ImageCommands/ImageRemove 0.59
137 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.76
138 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.57
139 TestFunctional/parallel/ServiceCmd/List 0.28
140 TestFunctional/parallel/ServiceCmd/JSONOutput 0.3
141 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
142 TestFunctional/parallel/ServiceCmd/HTTPS 0.33
143 TestFunctional/parallel/ProfileCmd/profile_list 0.36
144 TestFunctional/parallel/ServiceCmd/Format 0.33
145 TestFunctional/parallel/ProfileCmd/profile_json_output 0.46
146 TestFunctional/parallel/ServiceCmd/URL 0.36
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
150 TestFunctional/parallel/MountCmd/any-port 26.72
151 TestFunctional/parallel/MountCmd/specific-port 2.2
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.86
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 197.64
160 TestMultiControlPlane/serial/DeployApp 6.07
161 TestMultiControlPlane/serial/PingHostFromPods 1.12
162 TestMultiControlPlane/serial/AddWorkerNode 59.21
163 TestMultiControlPlane/serial/NodeLabels 0.07
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.85
165 TestMultiControlPlane/serial/CopyFile 12.93
166 TestMultiControlPlane/serial/StopSecondaryNode 91.62
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.63
168 TestMultiControlPlane/serial/RestartSecondaryNode 47.56
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.85
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 424.77
171 TestMultiControlPlane/serial/DeleteSecondaryNode 18.11
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.61
173 TestMultiControlPlane/serial/StopCluster 272.89
174 TestMultiControlPlane/serial/RestartCluster 103.2
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.63
176 TestMultiControlPlane/serial/AddSecondaryNode 77.4
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.83
181 TestJSONOutput/start/Command 77.97
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.67
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.6
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 7.34
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.2
209 TestMainNoArgs 0.05
210 TestMinikubeProfile 90.08
213 TestMountStart/serial/StartWithMountFirst 24.58
214 TestMountStart/serial/VerifyMountFirst 0.38
215 TestMountStart/serial/StartWithMountSecond 28.16
216 TestMountStart/serial/VerifyMountSecond 0.37
217 TestMountStart/serial/DeleteFirst 0.89
218 TestMountStart/serial/VerifyMountPostDelete 0.37
219 TestMountStart/serial/Stop 1.27
220 TestMountStart/serial/RestartStopped 21.47
221 TestMountStart/serial/VerifyMountPostStop 0.37
224 TestMultiNode/serial/FreshStart2Nodes 107.27
225 TestMultiNode/serial/DeployApp2Nodes 5.58
226 TestMultiNode/serial/PingHostFrom2Pods 0.76
227 TestMultiNode/serial/AddNode 46.74
228 TestMultiNode/serial/MultiNodeLabels 0.06
229 TestMultiNode/serial/ProfileList 0.56
230 TestMultiNode/serial/CopyFile 7.12
231 TestMultiNode/serial/StopNode 2.26
232 TestMultiNode/serial/StartAfterStop 37.97
233 TestMultiNode/serial/RestartKeepsNodes 328.09
234 TestMultiNode/serial/DeleteNode 2.68
235 TestMultiNode/serial/StopMultiNode 181.83
236 TestMultiNode/serial/RestartMultiNode 112.93
237 TestMultiNode/serial/ValidateNameConflict 41.84
244 TestScheduledStopUnix 116.92
248 TestRunningBinaryUpgrade 214.61
253 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
254 TestNoKubernetes/serial/StartWithK8s 90.78
255 TestStoppedBinaryUpgrade/Setup 2.74
256 TestStoppedBinaryUpgrade/Upgrade 134.83
257 TestNoKubernetes/serial/StartWithStopK8s 59.54
258 TestNoKubernetes/serial/Start 28.98
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
260 TestNoKubernetes/serial/ProfileList 29.89
261 TestNoKubernetes/serial/Stop 1.31
262 TestNoKubernetes/serial/StartNoArgs 22.42
270 TestStoppedBinaryUpgrade/MinikubeLogs 0.77
271 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
279 TestNetworkPlugins/group/false 3.1
284 TestPause/serial/Start 82.65
289 TestStartStop/group/no-preload/serial/FirstStart 93.5
291 TestStartStop/group/embed-certs/serial/FirstStart 60.75
292 TestStartStop/group/embed-certs/serial/DeployApp 10.28
294 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 57.91
295 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.98
296 TestStartStop/group/embed-certs/serial/Stop 91
297 TestStartStop/group/no-preload/serial/DeployApp 10.27
298 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.93
299 TestStartStop/group/no-preload/serial/Stop 90.99
300 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.25
301 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.9
302 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.04
305 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
306 TestStartStop/group/embed-certs/serial/SecondStart 295.95
307 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
311 TestStartStop/group/old-k8s-version/serial/Stop 3.58
312 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
314 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
315 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
316 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.21
317 TestStartStop/group/embed-certs/serial/Pause 2.48
319 TestStartStop/group/newest-cni/serial/FirstStart 46.13
320 TestStartStop/group/newest-cni/serial/DeployApp 0
321 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.49
322 TestStartStop/group/newest-cni/serial/Stop 10.33
323 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
324 TestStartStop/group/newest-cni/serial/SecondStart 36.28
325 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
326 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
327 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.21
328 TestStartStop/group/newest-cni/serial/Pause 2.25
329 TestNetworkPlugins/group/auto/Start 81.81
330 TestNetworkPlugins/group/auto/KubeletFlags 0.21
331 TestNetworkPlugins/group/auto/NetCatPod 11.22
332 TestNetworkPlugins/group/auto/DNS 0.14
333 TestNetworkPlugins/group/auto/Localhost 0.11
334 TestNetworkPlugins/group/auto/HairPin 0.11
335 TestNetworkPlugins/group/kindnet/Start 58.16
336 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
337 TestNetworkPlugins/group/kindnet/KubeletFlags 0.2
338 TestNetworkPlugins/group/kindnet/NetCatPod 10.21
339 TestNetworkPlugins/group/kindnet/DNS 0.14
340 TestNetworkPlugins/group/kindnet/Localhost 0.11
341 TestNetworkPlugins/group/kindnet/HairPin 0.12
342 TestNetworkPlugins/group/calico/Start 80.6
344 TestNetworkPlugins/group/calico/ControllerPod 6.01
345 TestNetworkPlugins/group/calico/KubeletFlags 0.21
346 TestNetworkPlugins/group/calico/NetCatPod 12.2
347 TestNetworkPlugins/group/calico/DNS 0.14
348 TestNetworkPlugins/group/calico/Localhost 0.12
349 TestNetworkPlugins/group/calico/HairPin 0.12
350 TestNetworkPlugins/group/custom-flannel/Start 68.46
351 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.2
352 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.23
353 TestNetworkPlugins/group/custom-flannel/DNS 0.14
354 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
355 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
356 TestNetworkPlugins/group/enable-default-cni/Start 76.49
357 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
358 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.2
359 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
360 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
361 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
362 TestNetworkPlugins/group/flannel/Start 70.7
363 TestNetworkPlugins/group/flannel/ControllerPod 6.01
364 TestNetworkPlugins/group/flannel/KubeletFlags 0.2
365 TestNetworkPlugins/group/flannel/NetCatPod 10.22
366 TestNetworkPlugins/group/flannel/DNS 0.13
367 TestNetworkPlugins/group/flannel/Localhost 0.14
368 TestNetworkPlugins/group/flannel/HairPin 0.11
369 TestNetworkPlugins/group/bridge/Start 52.62
370 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
371 TestNetworkPlugins/group/bridge/NetCatPod 11.21
372 TestNetworkPlugins/group/bridge/DNS 0.14
373 TestNetworkPlugins/group/bridge/Localhost 0.13
374 TestNetworkPlugins/group/bridge/HairPin 0.11
x
+
TestDownloadOnly/v1.20.0/json-events (22.39s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-929502 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-929502 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (22.389738918s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (22.39s)
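This subtest drives "start -o=json --download-only", which, as the json-events name suggests, prints a stream of JSON events while the ISO and preload download. A minimal sketch for replaying the same command and inspecting the stream, assuming jq is installed on the host and the events arrive as one JSON object per line; the profile name below is hypothetical, while the flags are the ones used above:

	# Re-run the download-only start and compact-print each JSON event as it arrives.
	out/minikube-linux-amd64 start -o=json --download-only -p download-demo --force \
	  --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2 | jq -c .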

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0127 11:24:07.649103 1731396 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0127 11:24:07.649204 1731396 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
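preload-exists only asserts that the tarball fetched by the previous subtest is present in the local cache; the preload.go lines above print the exact path. A hand-run equivalent, assuming the same MINIKUBE_HOME as this job:

	# List the cached preload tarballs at the path reported by preload.go above.
	ls -lh /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/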

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-929502
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-929502: exit status 85 (63.525934ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-929502 | jenkins | v1.35.0 | 27 Jan 25 11:23 UTC |          |
	|         | -p download-only-929502        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 11:23:45
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 11:23:45.302725 1731408 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:23:45.302908 1731408 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:23:45.302923 1731408 out.go:358] Setting ErrFile to fd 2...
	I0127 11:23:45.302930 1731408 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:23:45.303098 1731408 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-1724227/.minikube/bin
	W0127 11:23:45.303227 1731408 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20318-1724227/.minikube/config/config.json: open /home/jenkins/minikube-integration/20318-1724227/.minikube/config/config.json: no such file or directory
	I0127 11:23:45.303766 1731408 out.go:352] Setting JSON to true
	I0127 11:23:45.304774 1731408 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":29166,"bootTime":1737947859,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 11:23:45.304886 1731408 start.go:139] virtualization: kvm guest
	I0127 11:23:45.306983 1731408 out.go:97] [download-only-929502] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0127 11:23:45.307087 1731408 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball: no such file or directory
	I0127 11:23:45.307161 1731408 notify.go:220] Checking for updates...
	I0127 11:23:45.308119 1731408 out.go:169] MINIKUBE_LOCATION=20318
	I0127 11:23:45.309291 1731408 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:23:45.310525 1731408 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20318-1724227/kubeconfig
	I0127 11:23:45.311719 1731408 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-1724227/.minikube
	I0127 11:23:45.312962 1731408 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0127 11:23:45.314941 1731408 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0127 11:23:45.315155 1731408 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 11:23:45.348361 1731408 out.go:97] Using the kvm2 driver based on user configuration
	I0127 11:23:45.348384 1731408 start.go:297] selected driver: kvm2
	I0127 11:23:45.348390 1731408 start.go:901] validating driver "kvm2" against <nil>
	I0127 11:23:45.348677 1731408 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:23:45.348753 1731408 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20318-1724227/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 11:23:45.363492 1731408 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 11:23:45.363542 1731408 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 11:23:45.364191 1731408 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0127 11:23:45.364380 1731408 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 11:23:45.364419 1731408 cni.go:84] Creating CNI manager for ""
	I0127 11:23:45.364482 1731408 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:23:45.364499 1731408 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 11:23:45.364561 1731408 start.go:340] cluster config:
	{Name:download-only-929502 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-929502 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:23:45.364802 1731408 iso.go:125] acquiring lock: {Name:mkadfcca31fda677c8c62cad1d325dd2bd0a2473 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:23:45.366256 1731408 out.go:97] Downloading VM boot image ...
	I0127 11:23:45.366298 1731408 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 11:23:54.566061 1731408 out.go:97] Starting "download-only-929502" primary control-plane node in "download-only-929502" cluster
	I0127 11:23:54.566088 1731408 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 11:23:54.676996 1731408 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0127 11:23:54.677055 1731408 cache.go:56] Caching tarball of preloaded images
	I0127 11:23:54.677261 1731408 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 11:23:54.678954 1731408 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0127 11:23:54.678974 1731408 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0127 11:23:54.777863 1731408 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-929502 host does not exist
	  To start a cluster, run: "minikube start -p download-only-929502"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-929502
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/json-events (12.33s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-006941 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-006941 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (12.326012279s)
--- PASS: TestDownloadOnly/v1.32.1/json-events (12.33s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/preload-exists
I0127 11:24:20.321461 1731396 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
I0127 11:24:20.321515 1731396 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/LogsDuration (0.61s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-006941
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-006941: exit status 85 (611.984899ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-929502 | jenkins | v1.35.0 | 27 Jan 25 11:23 UTC |                     |
	|         | -p download-only-929502        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 27 Jan 25 11:24 UTC | 27 Jan 25 11:24 UTC |
	| delete  | -p download-only-929502        | download-only-929502 | jenkins | v1.35.0 | 27 Jan 25 11:24 UTC | 27 Jan 25 11:24 UTC |
	| start   | -o=json --download-only        | download-only-006941 | jenkins | v1.35.0 | 27 Jan 25 11:24 UTC |                     |
	|         | -p download-only-006941        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 11:24:08
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 11:24:08.036319 1731648 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:24:08.036602 1731648 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:24:08.036613 1731648 out.go:358] Setting ErrFile to fd 2...
	I0127 11:24:08.036619 1731648 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:24:08.036803 1731648 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-1724227/.minikube/bin
	I0127 11:24:08.037407 1731648 out.go:352] Setting JSON to true
	I0127 11:24:08.038357 1731648 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":29189,"bootTime":1737947859,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 11:24:08.038481 1731648 start.go:139] virtualization: kvm guest
	I0127 11:24:08.040387 1731648 out.go:97] [download-only-006941] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 11:24:08.040533 1731648 notify.go:220] Checking for updates...
	I0127 11:24:08.041751 1731648 out.go:169] MINIKUBE_LOCATION=20318
	I0127 11:24:08.042893 1731648 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:24:08.044059 1731648 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20318-1724227/kubeconfig
	I0127 11:24:08.045338 1731648 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-1724227/.minikube
	I0127 11:24:08.046338 1731648 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0127 11:24:08.048408 1731648 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0127 11:24:08.048617 1731648 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 11:24:08.079531 1731648 out.go:97] Using the kvm2 driver based on user configuration
	I0127 11:24:08.079563 1731648 start.go:297] selected driver: kvm2
	I0127 11:24:08.079572 1731648 start.go:901] validating driver "kvm2" against <nil>
	I0127 11:24:08.079867 1731648 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:24:08.079974 1731648 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20318-1724227/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 11:24:08.094564 1731648 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 11:24:08.094615 1731648 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 11:24:08.095248 1731648 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0127 11:24:08.095436 1731648 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 11:24:08.095471 1731648 cni.go:84] Creating CNI manager for ""
	I0127 11:24:08.095547 1731648 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:24:08.095561 1731648 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 11:24:08.095629 1731648 start.go:340] cluster config:
	{Name:download-only-006941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:download-only-006941 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:24:08.095772 1731648 iso.go:125] acquiring lock: {Name:mkadfcca31fda677c8c62cad1d325dd2bd0a2473 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:24:08.097110 1731648 out.go:97] Starting "download-only-006941" primary control-plane node in "download-only-006941" cluster
	I0127 11:24:08.097129 1731648 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 11:24:08.601606 1731648 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 11:24:08.601669 1731648 cache.go:56] Caching tarball of preloaded images
	I0127 11:24:08.601898 1731648 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 11:24:08.603512 1731648 out.go:97] Downloading Kubernetes v1.32.1 preload ...
	I0127 11:24:08.603528 1731648 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 ...
	I0127 11:24:08.703611 1731648 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2af56a340efcc3949401b47b9a5d537 -> /home/jenkins/minikube-integration/20318-1724227/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-006941 host does not exist
	  To start a cluster, run: "minikube start -p download-only-006941"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.1/LogsDuration (0.61s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.1/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-006941
--- PASS: TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
I0127 11:24:21.478528 1731396 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-922671 --alsologtostderr --binary-mirror http://127.0.0.1:45489 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-922671" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-922671
--- PASS: TestBinaryMirror (0.60s)
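TestBinaryMirror exercises the --binary-mirror flag, which points Kubernetes binary downloads at a user-supplied HTTP endpoint instead of dl.k8s.io. A rough sketch of standing up such a mirror by hand; the served directory layout is an assumption here (the dl.k8s.io URL above suggests release/<version>/bin/linux/amd64/...), and the profile name below is hypothetical:

	# Serve a local directory as the mirror, then point a download-only start at it.
	python3 -m http.server 45489 --directory ./k8s-mirror &
	out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
	  --binary-mirror http://127.0.0.1:45489 --driver=kvm2 --container-runtime=crio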

                                                
                                    
x
+
TestOffline (80.7s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-266554 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-266554 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m19.257841877s)
helpers_test.go:175: Cleaning up "offline-crio-266554" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-266554
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-266554: (1.444552001s)
--- PASS: TestOffline (80.70s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-010792
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-010792: exit status 85 (52.571065ms)

                                                
                                                
-- stdout --
	* Profile "addons-010792" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-010792"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-010792
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-010792: exit status 85 (51.567385ms)

                                                
                                                
-- stdout --
	* Profile "addons-010792" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-010792"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (133.55s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-010792 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-010792 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m13.55025705s)
--- PASS: TestAddons/Setup (133.55s)
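Setup enables every addon under test through repeated --addons flags on a single start invocation. Individual addons can also be toggled later on the running profile, which is what the parallel tests below do during cleanup; a small hand-run example against the same profile:

	# Toggle one addon on the existing profile and list the resulting addon states.
	out/minikube-linux-amd64 -p addons-010792 addons enable metrics-server
	out/minikube-linux-amd64 -p addons-010792 addons list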

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-010792 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-010792 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (11.49s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-010792 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-010792 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f71e8580-dbab-4556-a5d9-8525eb0f75d4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f71e8580-dbab-4556-a5d9-8525eb0f75d4] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.004570621s
addons_test.go:633: (dbg) Run:  kubectl --context addons-010792 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-010792 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-010792 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.49s)

                                                
                                    
x
+
TestAddons/parallel/Registry (16.96s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 34.959327ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-2rdf7" [3d0c6731-428a-4f72-bdcf-d9af53b4e161] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003441115s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-nr6jq" [ce9ee54b-9eed-45c9-897f-850f5632d1a5] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.007350096s
addons_test.go:331: (dbg) Run:  kubectl --context addons-010792 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-010792 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-010792 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.174442461s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-010792 ip
2025/01/27 11:27:12 [DEBUG] GET http://192.168.39.45:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-010792 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.96s)
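After the registry pods settle, the test resolves the node IP and probes port 5000 (the DEBUG GET above). A hand-run equivalent from the host, assuming the addon serves the standard Docker Registry HTTP API v2 on that port:

	# Query the registry catalog, resolving the node IP the same way the test does.
	curl -s "http://$(out/minikube-linux-amd64 -p addons-010792 ip):5000/v2/_catalog"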

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.89s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-9vhl9" [b469ae3b-3a4f-4b78-8dcf-46b6be0ed867] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00464727s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-010792 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-010792 addons disable inspektor-gadget --alsologtostderr -v=1: (5.880483652s)
--- PASS: TestAddons/parallel/InspektorGadget (11.89s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.82s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 4.46286ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-sqf8v" [df1191e5-c231-47c1-8b91-358cc72cdd03] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004939025s
addons_test.go:402: (dbg) Run:  kubectl --context addons-010792 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-010792 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.82s)

                                                
                                    
x
+
TestAddons/parallel/CSI (41.08s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0127 11:27:14.722253 1731396 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0127 11:27:14.726786 1731396 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0127 11:27:14.726820 1731396 kapi.go:107] duration metric: took 4.587458ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 4.598134ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-010792 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010792 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010792 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010792 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010792 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010792 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-010792 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [d0cab711-e1f3-4dbd-bf58-dfb47c6451c6] Pending
helpers_test.go:344: "task-pv-pod" [d0cab711-e1f3-4dbd-bf58-dfb47c6451c6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [d0cab711-e1f3-4dbd-bf58-dfb47c6451c6] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.003787476s
addons_test.go:511: (dbg) Run:  kubectl --context addons-010792 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-010792 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-010792 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-010792 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-010792 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-010792 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010792 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010792 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-010792 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [27ec2a13-e7ed-4282-85ee-13264d989ae2] Pending
helpers_test.go:344: "task-pv-pod-restore" [27ec2a13-e7ed-4282-85ee-13264d989ae2] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.004019473s
addons_test.go:553: (dbg) Run:  kubectl --context addons-010792 delete pod task-pv-pod-restore
addons_test.go:553: (dbg) Done: kubectl --context addons-010792 delete pod task-pv-pod-restore: (1.616160045s)
addons_test.go:557: (dbg) Run:  kubectl --context addons-010792 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-010792 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-010792 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-010792 addons disable volumesnapshots --alsologtostderr -v=1: (1.018107703s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-010792 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-010792 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.767039015s)
--- PASS: TestAddons/parallel/CSI (41.08s)
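The CSI subtest runs one full provision/snapshot/restore cycle: create the hpvc claim, bind it with task-pv-pod, snapshot it as new-snapshot-demo, delete the pod and claim, restore the snapshot into hpvc-restore, and mount that from task-pv-pod-restore. The same probes the helpers use can be run by hand while the cycle is in flight:

	# Watch the claim phase and snapshot readiness (object names taken from the test above).
	kubectl --context addons-010792 get pvc hpvc -o jsonpath={.status.phase}
	kubectl --context addons-010792 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse}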

                                                
                                    
x
+
TestAddons/parallel/Headlamp (20.81s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-010792 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-010792 --alsologtostderr -v=1: (1.108081703s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-69d78d796f-btpks" [de2bb212-e850-4330-84e1-916fbd06e500] Pending
helpers_test.go:344: "headlamp-69d78d796f-btpks" [de2bb212-e850-4330-84e1-916fbd06e500] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-69d78d796f-btpks" [de2bb212-e850-4330-84e1-916fbd06e500] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.003619688s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-010792 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-010792 addons disable headlamp --alsologtostderr -v=1: (5.700299593s)
--- PASS: TestAddons/parallel/Headlamp (20.81s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.62s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-4rqpp" [46868e26-eab0-4e19-9333-fe1716a5f3b8] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003542309s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-010792 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.62s)

                                                
                                    
TestAddons/parallel/LocalPath (57.11s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-010792 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-010792 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010792 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010792 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010792 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010792 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010792 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010792 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010792 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [338477dc-a647-4285-8245-e5c50a57072f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [338477dc-a647-4285-8245-e5c50a57072f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [338477dc-a647-4285-8245-e5c50a57072f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 7.003789253s
addons_test.go:906: (dbg) Run:  kubectl --context addons-010792 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-010792 ssh "cat /opt/local-path-provisioner/pvc-4b8022cd-4161-4f38-be88-efcf1f11f636_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-010792 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-010792 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-010792 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-010792 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.277805858s)
--- PASS: TestAddons/parallel/LocalPath (57.11s)
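
Note: the repeated Pending reads on test-pvc above are expected with this addon: the local-path provisioner binds volumes with WaitForFirstConsumer, so the claim only provisions once the consuming pod is scheduled. A minimal sketch of the pair of objects involved, assuming the conventional local-path storage class name; this is illustrative, not the actual testdata/storage-provisioner-rancher manifests.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: local-path            # assumed class name installed by the addon
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 64Mi                        # illustrative size
---
apiVersion: v1
kind: Pod
metadata:
  name: test-local-path                    # matches the run=test-local-path wait above
  labels:
    run: test-local-path
spec:
  restartPolicy: Never                     # pod runs once, hence the Succeeded/PodCompleted states above
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo local-path > /data/file1"]   # illustrative write; the test later cats file1 from the host path
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-pvc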

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.96s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-sdq9s" [aae0f9ef-186d-454f-bead-016953edfdbe] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004638348s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-010792 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.96s)

                                                
                                    
TestAddons/parallel/Yakd (11.81s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-6sg6m" [97cf7fe9-ad47-41d0-9111-e7e2c817d258] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003672234s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-010792 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-010792 addons disable yakd --alsologtostderr -v=1: (5.808220496s)
--- PASS: TestAddons/parallel/Yakd (11.81s)

                                                
                                    
TestAddons/StoppedEnableDisable (91.25s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-010792
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-010792: (1m30.957013622s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-010792
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-010792
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-010792
--- PASS: TestAddons/StoppedEnableDisable (91.25s)

                                                
                                    
TestCertOptions (78.55s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-324519 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-324519 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m17.093614554s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-324519 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-324519 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-324519 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-324519" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-324519
--- PASS: TestCertOptions (78.55s)

                                                
                                    
TestCertExpiration (286.89s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-103712 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-103712 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (56.689118312s)
E0127 12:26:36.326868 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-103712 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
E0127 12:29:50.074436 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/functional-977534/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-103712 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (49.391939549s)
helpers_test.go:175: Cleaning up "cert-expiration-103712" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-103712
--- PASS: TestCertExpiration (286.89s)

                                                
                                    
TestForceSystemdFlag (78.92s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-980891 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-980891 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m17.922035624s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-980891 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-980891" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-980891
--- PASS: TestForceSystemdFlag (78.92s)

                                                
                                    
TestForceSystemdEnv (40.51s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-303464 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-303464 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (39.734649667s)
helpers_test.go:175: Cleaning up "force-systemd-env-303464" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-303464
--- PASS: TestForceSystemdEnv (40.51s)

                                                
                                    
TestKVMDriverInstallOrUpdate (3.85s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0127 12:26:41.762327 1731396 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 12:26:41.762492 1731396 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0127 12:26:41.791772 1731396 install.go:62] docker-machine-driver-kvm2: exit status 1
W0127 12:26:41.792176 1731396 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0127 12:26:41.792257 1731396 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2855746496/001/docker-machine-driver-kvm2
I0127 12:26:41.995414 1731396 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2855746496/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530c6a0 0x530c6a0 0x530c6a0 0x530c6a0 0x530c6a0 0x530c6a0 0x530c6a0] Decompressors:map[bz2:0xc000015950 gz:0xc000015958 tar:0xc000015900 tar.bz2:0xc000015910 tar.gz:0xc000015920 tar.xz:0xc000015930 tar.zst:0xc000015940 tbz2:0xc000015910 tgz:0xc000015920 txz:0xc000015930 tzst:0xc000015940 xz:0xc000015960 zip:0xc000015970 zst:0xc000015968] Getters:map[file:0xc001575830 http:0xc0007a6320 https:0xc0007a6370] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0127 12:26:41.995459 1731396 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2855746496/001/docker-machine-driver-kvm2
I0127 12:26:43.902437 1731396 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 12:26:43.902524 1731396 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0127 12:26:43.933771 1731396 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0127 12:26:43.933802 1731396 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0127 12:26:43.933877 1731396 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0127 12:26:43.933906 1731396 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2855746496/002/docker-machine-driver-kvm2
I0127 12:26:43.969359 1731396 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2855746496/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530c6a0 0x530c6a0 0x530c6a0 0x530c6a0 0x530c6a0 0x530c6a0 0x530c6a0] Decompressors:map[bz2:0xc000015950 gz:0xc000015958 tar:0xc000015900 tar.bz2:0xc000015910 tar.gz:0xc000015920 tar.xz:0xc000015930 tar.zst:0xc000015940 tbz2:0xc000015910 tgz:0xc000015920 txz:0xc000015930 tzst:0xc000015940 xz:0xc000015960 zip:0xc000015970 zst:0xc000015968] Getters:map[file:0xc001c30a60 http:0xc00075f3b0 https:0xc00075f400] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0127 12:26:43.969396 1731396 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2855746496/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (3.85s)

                                                
                                    
TestErrorSpam/setup (37.87s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-500809 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-500809 --driver=kvm2  --container-runtime=crio
E0127 11:31:36.327352 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:31:36.333772 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:31:36.345147 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:31:36.366524 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:31:36.407953 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:31:36.489384 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:31:36.650882 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:31:36.972654 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:31:37.614778 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:31:38.896476 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:31:41.459410 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:31:46.580956 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:31:56.822586 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-500809 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-500809 --driver=kvm2  --container-runtime=crio: (37.868722844s)
--- PASS: TestErrorSpam/setup (37.87s)

                                                
                                    
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-500809 --log_dir /tmp/nospam-500809 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-500809 --log_dir /tmp/nospam-500809 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-500809 --log_dir /tmp/nospam-500809 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
TestErrorSpam/status (0.71s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-500809 --log_dir /tmp/nospam-500809 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-500809 --log_dir /tmp/nospam-500809 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-500809 --log_dir /tmp/nospam-500809 status
--- PASS: TestErrorSpam/status (0.71s)

                                                
                                    
TestErrorSpam/pause (1.49s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-500809 --log_dir /tmp/nospam-500809 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-500809 --log_dir /tmp/nospam-500809 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-500809 --log_dir /tmp/nospam-500809 pause
--- PASS: TestErrorSpam/pause (1.49s)

                                                
                                    
TestErrorSpam/unpause (1.6s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-500809 --log_dir /tmp/nospam-500809 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-500809 --log_dir /tmp/nospam-500809 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-500809 --log_dir /tmp/nospam-500809 unpause
--- PASS: TestErrorSpam/unpause (1.60s)

                                                
                                    
TestErrorSpam/stop (5.44s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-500809 --log_dir /tmp/nospam-500809 stop
E0127 11:32:17.304840 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-500809 --log_dir /tmp/nospam-500809 stop: (2.284768211s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-500809 --log_dir /tmp/nospam-500809 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-500809 --log_dir /tmp/nospam-500809 stop: (1.587607877s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-500809 --log_dir /tmp/nospam-500809 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-500809 --log_dir /tmp/nospam-500809 stop: (1.564563216s)
--- PASS: TestErrorSpam/stop (5.44s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20318-1724227/.minikube/files/etc/test/nested/copy/1731396/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (81.69s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-977534 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0127 11:32:58.266994 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-977534 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m21.694015695s)
--- PASS: TestFunctional/serial/StartWithProxy (81.69s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (33.06s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0127 11:33:43.652034 1731396 config.go:182] Loaded profile config "functional-977534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-977534 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-977534 --alsologtostderr -v=8: (33.054915939s)
functional_test.go:663: soft start took 33.055798655s for "functional-977534" cluster.
I0127 11:34:16.707364 1731396 config.go:182] Loaded profile config "functional-977534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/SoftStart (33.06s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-977534 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.61s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-977534 cache add registry.k8s.io/pause:3.1: (1.219640192s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-977534 cache add registry.k8s.io/pause:3.3: (1.295619574s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 cache add registry.k8s.io/pause:latest
E0127 11:34:20.188485 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-977534 cache add registry.k8s.io/pause:latest: (1.092523872s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.61s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-977534 /tmp/TestFunctionalserialCacheCmdcacheadd_local4166836924/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 cache add minikube-local-cache-test:functional-977534
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-977534 cache add minikube-local-cache-test:functional-977534: (1.781323302s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 cache delete minikube-local-cache-test:functional-977534
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-977534
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.73s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-977534 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (220.643774ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-977534 cache reload: (1.022046472s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.73s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 kubectl -- --context functional-977534 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-977534 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (34.89s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-977534 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-977534 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.885622986s)
functional_test.go:761: restart took 34.885733597s for "functional-977534" cluster.
I0127 11:34:59.767955 1731396 config.go:182] Loaded profile config "functional-977534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/ExtraConfig (34.89s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-977534 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.32s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-977534 logs: (1.316681185s)
--- PASS: TestFunctional/serial/LogsCmd (1.32s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.37s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 logs --file /tmp/TestFunctionalserialLogsFileCmd2700569205/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-977534 logs --file /tmp/TestFunctionalserialLogsFileCmd2700569205/001/logs.txt: (1.368498552s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.37s)

                                                
                                    
TestFunctional/serial/InvalidService (4.27s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-977534 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-977534
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-977534: exit status 115 (279.080619ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.48:32722 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-977534 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.27s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-977534 config get cpus: exit status 14 (61.735262ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-977534 config get cpus: exit status 14 (65.595484ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.38s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (40.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-977534 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-977534 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1739522: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (40.50s)

                                                
                                    
TestFunctional/parallel/DryRun (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-977534 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-977534 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (157.798671ms)

                                                
                                                
-- stdout --
	* [functional-977534] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20318
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20318-1724227/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-1724227/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 11:35:17.489570 1738762 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:35:17.489705 1738762 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:35:17.489718 1738762 out.go:358] Setting ErrFile to fd 2...
	I0127 11:35:17.489724 1738762 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:35:17.490052 1738762 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-1724227/.minikube/bin
	I0127 11:35:17.490803 1738762 out.go:352] Setting JSON to false
	I0127 11:35:17.492164 1738762 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":29858,"bootTime":1737947859,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 11:35:17.492315 1738762 start.go:139] virtualization: kvm guest
	I0127 11:35:17.494527 1738762 out.go:177] * [functional-977534] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 11:35:17.495786 1738762 notify.go:220] Checking for updates...
	I0127 11:35:17.495825 1738762 out.go:177]   - MINIKUBE_LOCATION=20318
	I0127 11:35:17.497199 1738762 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:35:17.498649 1738762 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20318-1724227/kubeconfig
	I0127 11:35:17.500623 1738762 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-1724227/.minikube
	I0127 11:35:17.501875 1738762 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 11:35:17.503105 1738762 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 11:35:17.504844 1738762 config.go:182] Loaded profile config "functional-977534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:35:17.505390 1738762 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:35:17.505474 1738762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:35:17.524614 1738762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36985
	I0127 11:35:17.525115 1738762 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:35:17.525669 1738762 main.go:141] libmachine: Using API Version  1
	I0127 11:35:17.525691 1738762 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:35:17.525996 1738762 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:35:17.526268 1738762 main.go:141] libmachine: (functional-977534) Calling .DriverName
	I0127 11:35:17.526561 1738762 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 11:35:17.527009 1738762 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:35:17.527080 1738762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:35:17.544103 1738762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40975
	I0127 11:35:17.544643 1738762 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:35:17.545254 1738762 main.go:141] libmachine: Using API Version  1
	I0127 11:35:17.545275 1738762 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:35:17.545613 1738762 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:35:17.545874 1738762 main.go:141] libmachine: (functional-977534) Calling .DriverName
	I0127 11:35:17.587017 1738762 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 11:35:17.588037 1738762 start.go:297] selected driver: kvm2
	I0127 11:35:17.588055 1738762 start.go:901] validating driver "kvm2" against &{Name:functional-977534 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-977534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:35:17.588153 1738762 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 11:35:17.590439 1738762 out.go:201] 
	W0127 11:35:17.591743 1738762 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0127 11:35:17.592901 1738762 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-977534 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.31s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-977534 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-977534 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (163.014377ms)

                                                
                                                
-- stdout --
	* [functional-977534] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20318
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20318-1724227/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-1724227/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 11:35:17.804542 1738899 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:35:17.804858 1738899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:35:17.804870 1738899 out.go:358] Setting ErrFile to fd 2...
	I0127 11:35:17.804875 1738899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:35:17.805199 1738899 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-1724227/.minikube/bin
	I0127 11:35:17.805786 1738899 out.go:352] Setting JSON to false
	I0127 11:35:17.806860 1738899 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":29859,"bootTime":1737947859,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 11:35:17.806975 1738899 start.go:139] virtualization: kvm guest
	I0127 11:35:17.809991 1738899 out.go:177] * [functional-977534] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0127 11:35:17.811388 1738899 notify.go:220] Checking for updates...
	I0127 11:35:17.811417 1738899 out.go:177]   - MINIKUBE_LOCATION=20318
	I0127 11:35:17.812660 1738899 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:35:17.815289 1738899 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20318-1724227/kubeconfig
	I0127 11:35:17.816453 1738899 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-1724227/.minikube
	I0127 11:35:17.817702 1738899 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 11:35:17.818808 1738899 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 11:35:17.820351 1738899 config.go:182] Loaded profile config "functional-977534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:35:17.820749 1738899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:35:17.820803 1738899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:35:17.845865 1738899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45315
	I0127 11:35:17.846369 1738899 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:35:17.847079 1738899 main.go:141] libmachine: Using API Version  1
	I0127 11:35:17.847110 1738899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:35:17.847417 1738899 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:35:17.847773 1738899 main.go:141] libmachine: (functional-977534) Calling .DriverName
	I0127 11:35:17.848033 1738899 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 11:35:17.848375 1738899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:35:17.848418 1738899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:35:17.864322 1738899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45427
	I0127 11:35:17.864748 1738899 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:35:17.865218 1738899 main.go:141] libmachine: Using API Version  1
	I0127 11:35:17.865246 1738899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:35:17.865624 1738899 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:35:17.865817 1738899 main.go:141] libmachine: (functional-977534) Calling .DriverName
	I0127 11:35:17.901645 1738899 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0127 11:35:17.902879 1738899 start.go:297] selected driver: kvm2
	I0127 11:35:17.902901 1738899 start.go:901] validating driver "kvm2" against &{Name:functional-977534 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-977534 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Moun
t9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:35:17.903026 1738899 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 11:35:17.904958 1738899 out.go:201] 
	W0127 11:35:17.906227 1738899 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0127 11:35:17.907352 1738899 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)
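Note: this is the same dry-run start that TestFunctional/parallel/DryRun issues above; the output is in French because the test runs the command under a French locale (presumably via LC_ALL/LANG, which is not visible in this log), and the expected outcome is still the RSRC_INSUFFICIENT_REQ_MEMORY failure, here surfaced as exit status 23. A minimal sketch of reproducing it by hand, with the locale variable being an assumption rather than something taken from this log:
	LC_ALL=fr out/minikube-linux-amd64 start -p functional-977534 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 --container-runtime=crio
	# expected: non-zero exit with "X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY ..."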

                                                
                                    
TestFunctional/parallel/StatusCmd (0.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.90s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (9.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-977534 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-977534 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-c2cxw" [88b0d630-d06c-4faa-8efc-2c4943b4537a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-c2cxw" [88b0d630-d06c-4faa-8efc-2c4943b4537a] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.003780998s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.48:32267
functional_test.go:1675: http://192.168.39.48:32267: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-58f9cf68d8-c2cxw

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.48:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.48:32267
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.54s)
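Note: the NodePort URL reported by "service hello-node-connect --url" (http://192.168.39.48:32267) forwards to the echoserver's container port 8080, which is why the echoed request_uri shows port 8080 while the host header shows 32267. The test fetches the URL with Go's HTTP client (user-agent Go-http-client/1.1); an equivalent manual probe, with curl being an assumption rather than what the test itself runs, would be:
	curl http://192.168.39.48:32267/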

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (51.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [1e26df9b-e7e9-4394-9990-eaeaa7f1685b] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003927096s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-977534 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-977534 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-977534 get pvc myclaim -o=json
I0127 11:35:12.935257 1731396 retry.go:31] will retry after 2.526918965s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:2b57efce-cc3b-4bff-9197-d8b21c02e220 ResourceVersion:764 Generation:0 CreationTimestamp:2025-01-27 11:35:12 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001ac8360 VolumeMode:0xc001ac8370 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-977534 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-977534 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [92b11258-a45a-4765-b8d0-e9003fc93387] Pending
helpers_test.go:344: "sp-pod" [92b11258-a45a-4765-b8d0-e9003fc93387] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [92b11258-a45a-4765-b8d0-e9003fc93387] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.003929068s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-977534 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-977534 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-977534 delete -f testdata/storage-provisioner/pod.yaml: (2.85253849s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-977534 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c684407a-b598-45ac-9579-ad38d4c9d428] Pending
helpers_test.go:344: "sp-pod" [c684407a-b598-45ac-9579-ad38d4c9d428] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c684407a-b598-45ac-9579-ad38d4c9d428] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 26.003972853s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-977534 exec sp-pod -- ls /tmp/mount
2025/01/27 11:35:59 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (51.84s)
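Note: the claim applied from testdata/storage-provisioner/pvc.yaml can be read back from the kubectl.kubernetes.io/last-applied-configuration annotation in the retry log above (ReadWriteOnce, 500Mi, Filesystem volume mode). A minimal sketch of creating an equivalent claim by hand, reconstructed from that annotation rather than copied from the testdata file:
	kubectl --context functional-977534 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	  namespace: default
	spec:
	  accessModes:
	    - ReadWriteOnce
	  resources:
	    requests:
	      storage: 500Mi
	  volumeMode: Filesystem
	EOF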

                                                
                                    
TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 ssh -n functional-977534 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 cp functional-977534:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3759860949/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 ssh -n functional-977534 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 ssh -n functional-977534 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.49s)

                                                
                                    
TestFunctional/parallel/MySQL (27.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-977534 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-wbqnb" [3c60e242-3622-433b-b046-c057aabe5a91] Pending
helpers_test.go:344: "mysql-58ccfd96bb-wbqnb" [3c60e242-3622-433b-b046-c057aabe5a91] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-wbqnb" [3c60e242-3622-433b-b046-c057aabe5a91] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 26.004073343s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-977534 exec mysql-58ccfd96bb-wbqnb -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-977534 exec mysql-58ccfd96bb-wbqnb -- mysql -ppassword -e "show databases;": exit status 1 (198.409915ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0127 11:35:45.285360 1731396 retry.go:31] will retry after 1.023321116s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-977534 exec mysql-58ccfd96bb-wbqnb -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.65s)
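Note: the first exec fails with ERROR 2002 because the pod is already Running but mysqld inside it has not yet finished initializing its socket; the harness simply retries (retry.go) about a second later and the query succeeds. The same check by hand is just a retry loop around the command already shown above, for example:
	until kubectl --context functional-977534 exec mysql-58ccfd96bb-wbqnb -- mysql -ppassword -e "show databases;"; do sleep 1; done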

                                                
                                    
TestFunctional/parallel/FileSync (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1731396/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 ssh "sudo cat /etc/test/nested/copy/1731396/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

                                                
                                    
TestFunctional/parallel/CertSync (1.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1731396.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 ssh "sudo cat /etc/ssl/certs/1731396.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1731396.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 ssh "sudo cat /usr/share/ca-certificates/1731396.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/17313962.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 ssh "sudo cat /etc/ssl/certs/17313962.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/17313962.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 ssh "sudo cat /usr/share/ca-certificates/17313962.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.67s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-977534 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-977534 ssh "sudo systemctl is-active docker": exit status 1 (262.796199ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-977534 ssh "sudo systemctl is-active containerd": exit status 1 (273.029175ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.54s)
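Note: with crio as the active runtime, the test expects both the docker and containerd units to be inactive. "systemctl is-active" prints the unit state and exits non-zero for anything but an active unit (status 3 here, visible as "Process exited with status 3"), which minikube ssh then surfaces as its own exit status 1, so the non-zero exits above are the passing case. The same check by hand:
	out/minikube-linux-amd64 -p functional-977534 ssh "sudo systemctl is-active docker"
	out/minikube-linux-amd64 -p functional-977534 ssh "sudo systemctl is-active containerd"
	# both print "inactive"; the remote systemctl exits 3 and minikube ssh exits 1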

                                                
                                    
TestFunctional/parallel/License (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.55s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (10.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-977534 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-977534 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-5pvj6" [f1a084e7-7eb1-4b44-bece-b47856e89c42] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-5pvj6" [f1a084e7-7eb1-4b44-bece-b47856e89c42] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.003178052s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.21s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.81s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (1.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 image ls --format short --alsologtostderr
functional_test.go:261: (dbg) Done: out/minikube-linux-amd64 -p functional-977534 image ls --format short --alsologtostderr: (1.612727593s)
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-977534 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-977534
localhost/kicbase/echo-server:functional-977534
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20241108-5c6d2daf
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-977534 image ls --format short --alsologtostderr:
I0127 11:35:47.439700 1739943 out.go:345] Setting OutFile to fd 1 ...
I0127 11:35:47.439963 1739943 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:35:47.439973 1739943 out.go:358] Setting ErrFile to fd 2...
I0127 11:35:47.439978 1739943 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:35:47.440221 1739943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-1724227/.minikube/bin
I0127 11:35:47.440895 1739943 config.go:182] Loaded profile config "functional-977534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 11:35:47.441010 1739943 config.go:182] Loaded profile config "functional-977534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 11:35:47.441381 1739943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 11:35:47.441454 1739943 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 11:35:47.457476 1739943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37351
I0127 11:35:47.457993 1739943 main.go:141] libmachine: () Calling .GetVersion
I0127 11:35:47.458683 1739943 main.go:141] libmachine: Using API Version  1
I0127 11:35:47.458705 1739943 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 11:35:47.459107 1739943 main.go:141] libmachine: () Calling .GetMachineName
I0127 11:35:47.459305 1739943 main.go:141] libmachine: (functional-977534) Calling .GetState
I0127 11:35:47.461171 1739943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 11:35:47.461216 1739943 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 11:35:47.476011 1739943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45321
I0127 11:35:47.476505 1739943 main.go:141] libmachine: () Calling .GetVersion
I0127 11:35:47.477191 1739943 main.go:141] libmachine: Using API Version  1
I0127 11:35:47.477223 1739943 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 11:35:47.477540 1739943 main.go:141] libmachine: () Calling .GetMachineName
I0127 11:35:47.477720 1739943 main.go:141] libmachine: (functional-977534) Calling .DriverName
I0127 11:35:47.478000 1739943 ssh_runner.go:195] Run: systemctl --version
I0127 11:35:47.478051 1739943 main.go:141] libmachine: (functional-977534) Calling .GetSSHHostname
I0127 11:35:47.480833 1739943 main.go:141] libmachine: (functional-977534) DBG | domain functional-977534 has defined MAC address 52:54:00:56:1d:0f in network mk-functional-977534
I0127 11:35:47.481255 1739943 main.go:141] libmachine: (functional-977534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:1d:0f", ip: ""} in network mk-functional-977534: {Iface:virbr1 ExpiryTime:2025-01-27 12:32:36 +0000 UTC Type:0 Mac:52:54:00:56:1d:0f Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:functional-977534 Clientid:01:52:54:00:56:1d:0f}
I0127 11:35:47.481290 1739943 main.go:141] libmachine: (functional-977534) DBG | domain functional-977534 has defined IP address 192.168.39.48 and MAC address 52:54:00:56:1d:0f in network mk-functional-977534
I0127 11:35:47.481407 1739943 main.go:141] libmachine: (functional-977534) Calling .GetSSHPort
I0127 11:35:47.481592 1739943 main.go:141] libmachine: (functional-977534) Calling .GetSSHKeyPath
I0127 11:35:47.481777 1739943 main.go:141] libmachine: (functional-977534) Calling .GetSSHUsername
I0127 11:35:47.481953 1739943 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/functional-977534/id_rsa Username:docker}
I0127 11:35:47.599681 1739943 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 11:35:48.996724 1739943 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.396992518s)
I0127 11:35:48.997105 1739943 main.go:141] libmachine: Making call to close driver server
I0127 11:35:48.997120 1739943 main.go:141] libmachine: (functional-977534) Calling .Close
I0127 11:35:48.997378 1739943 main.go:141] libmachine: Successfully made call to close driver server
I0127 11:35:48.997398 1739943 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 11:35:48.997401 1739943 main.go:141] libmachine: (functional-977534) DBG | Closing plugin on server side
I0127 11:35:48.997406 1739943 main.go:141] libmachine: Making call to close driver server
I0127 11:35:48.997420 1739943 main.go:141] libmachine: (functional-977534) Calling .Close
I0127 11:35:48.997658 1739943 main.go:141] libmachine: Successfully made call to close driver server
I0127 11:35:48.997681 1739943 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (1.61s)
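Note: as the stderr trace shows, "image ls" opens an SSH session to the node and runs "sudo crictl images --output json", then formats the result; most of the 1.61s here is that crictl call (1.39s). A roughly equivalent manual listing, skipping minikube's formatting:
	out/minikube-linux-amd64 -p functional-977534 ssh "sudo crictl images --output json"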

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-977534 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20241108-5c6d2daf | 50415e5d05f05 | 95MB   |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/etcd                    | 3.5.16-0           | a9e7e6b294baf | 151MB  |
| registry.k8s.io/kube-proxy              | v1.32.1            | e29f9c7391fd9 | 95.3MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/library/nginx                 | latest             | 9bea9f2796e23 | 196MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-apiserver          | v1.32.1            | 95c0bda56fc4d | 98.1MB |
| registry.k8s.io/kube-scheduler          | v1.32.1            | 2b0d6572d062c | 70.6MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/kicbase/echo-server           | functional-977534  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/kube-controller-manager | v1.32.1            | 019ee182b58e2 | 90.8MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| localhost/minikube-local-cache-test     | functional-977534  | 7d0e234a56a09 | 3.33kB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-977534 image ls --format table --alsologtostderr:
I0127 11:35:49.313599 1740196 out.go:345] Setting OutFile to fd 1 ...
I0127 11:35:49.313839 1740196 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:35:49.313850 1740196 out.go:358] Setting ErrFile to fd 2...
I0127 11:35:49.313853 1740196 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:35:49.314054 1740196 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-1724227/.minikube/bin
I0127 11:35:49.314655 1740196 config.go:182] Loaded profile config "functional-977534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 11:35:49.314781 1740196 config.go:182] Loaded profile config "functional-977534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 11:35:49.315196 1740196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 11:35:49.315273 1740196 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 11:35:49.332037 1740196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33671
I0127 11:35:49.332461 1740196 main.go:141] libmachine: () Calling .GetVersion
I0127 11:35:49.333010 1740196 main.go:141] libmachine: Using API Version  1
I0127 11:35:49.333034 1740196 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 11:35:49.333375 1740196 main.go:141] libmachine: () Calling .GetMachineName
I0127 11:35:49.333605 1740196 main.go:141] libmachine: (functional-977534) Calling .GetState
I0127 11:35:49.335437 1740196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 11:35:49.335479 1740196 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 11:35:49.351606 1740196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35663
I0127 11:35:49.352114 1740196 main.go:141] libmachine: () Calling .GetVersion
I0127 11:35:49.352612 1740196 main.go:141] libmachine: Using API Version  1
I0127 11:35:49.352636 1740196 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 11:35:49.353095 1740196 main.go:141] libmachine: () Calling .GetMachineName
I0127 11:35:49.353343 1740196 main.go:141] libmachine: (functional-977534) Calling .DriverName
I0127 11:35:49.353537 1740196 ssh_runner.go:195] Run: systemctl --version
I0127 11:35:49.353576 1740196 main.go:141] libmachine: (functional-977534) Calling .GetSSHHostname
I0127 11:35:49.356448 1740196 main.go:141] libmachine: (functional-977534) DBG | domain functional-977534 has defined MAC address 52:54:00:56:1d:0f in network mk-functional-977534
I0127 11:35:49.356845 1740196 main.go:141] libmachine: (functional-977534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:1d:0f", ip: ""} in network mk-functional-977534: {Iface:virbr1 ExpiryTime:2025-01-27 12:32:36 +0000 UTC Type:0 Mac:52:54:00:56:1d:0f Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:functional-977534 Clientid:01:52:54:00:56:1d:0f}
I0127 11:35:49.356877 1740196 main.go:141] libmachine: (functional-977534) DBG | domain functional-977534 has defined IP address 192.168.39.48 and MAC address 52:54:00:56:1d:0f in network mk-functional-977534
I0127 11:35:49.356940 1740196 main.go:141] libmachine: (functional-977534) Calling .GetSSHPort
I0127 11:35:49.357185 1740196 main.go:141] libmachine: (functional-977534) Calling .GetSSHKeyPath
I0127 11:35:49.357315 1740196 main.go:141] libmachine: (functional-977534) Calling .GetSSHUsername
I0127 11:35:49.357427 1740196 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/functional-977534/id_rsa Username:docker}
I0127 11:35:49.445249 1740196 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 11:35:49.506347 1740196 main.go:141] libmachine: Making call to close driver server
I0127 11:35:49.506363 1740196 main.go:141] libmachine: (functional-977534) Calling .Close
I0127 11:35:49.506670 1740196 main.go:141] libmachine: Successfully made call to close driver server
I0127 11:35:49.506710 1740196 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 11:35:49.506718 1740196 main.go:141] libmachine: Making call to close driver server
I0127 11:35:49.506724 1740196 main.go:141] libmachine: (functional-977534) Calling .Close
I0127 11:35:49.506691 1740196 main.go:141] libmachine: (functional-977534) DBG | Closing plugin on server side
I0127 11:35:49.507016 1740196 main.go:141] libmachine: (functional-977534) DBG | Closing plugin on server side
I0127 11:35:49.507031 1740196 main.go:141] libmachine: Successfully made call to close driver server
I0127 11:35:49.507046 1740196 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-977534 image ls --format json --alsologtostderr:
[{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1","repoDigests":["docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a","docker.io/library/nginx@sha256:2426c815287ed75a3a33dd28512eba4f0f783946844209ccf3fa8990817a4eb9"],"repoTags":["docker.io/library/nginx:latest"],"size":"195872148"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["re
gistry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e","registry.k8s.io/kube-scheduler@sha256:e2b8e00ff17f8b0427e34d28897d7bf6f7a63ec48913ea01d4082ab91ca28476"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.1"],"size":"70649158"},{"id":"50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e","repoDigests":["docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3","docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"94963761"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube
/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"7d0e234a56a0904ec1d49066fcda2353b03a750e0dd5f8517adcf35b8312a925","repoDigests":["localhost/minikube-local-cache-test@sha256:82343623df30f62dfeed8e3494ec3b568ef5739dad3a66c6f6264fe821ed573d"],"repoTags":["localhost/minikube-local-cache-test:functional-977534"],"size":"3330"},{"id":"95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:769a11bfd73df7db947d51b0f7a3a60383a0338904d6944cced924d33f0d7286","registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.1"],"size":"98051552"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":[
"registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35","repoDigests":["registry.k8s.io/kube-con
troller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954","registry.k8s.io/kube-controller-manager@sha256:c9067d10dcf5ca45b2be9260f3b15e9c94e05fd8039c53341a23d3b4cf0cc619"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.1"],"size":"90793286"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"151021823"},{"id":"e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd71021
61f1ded087897a","repoDigests":["registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5","registry.k8s.io/kube-proxy@sha256:a739122f1b5b17e2db96006120ad5fb9a3c654da07322bcaa62263c403ef69a8"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.1"],"size":"95271321"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c
6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-977534"],"size":"4943877"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-977534 image ls --format json --alsologtostderr:
I0127 11:35:49.054540 1740125 out.go:345] Setting OutFile to fd 1 ...
I0127 11:35:49.054674 1740125 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:35:49.054709 1740125 out.go:358] Setting ErrFile to fd 2...
I0127 11:35:49.054730 1740125 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:35:49.055230 1740125 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-1724227/.minikube/bin
I0127 11:35:49.055930 1740125 config.go:182] Loaded profile config "functional-977534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 11:35:49.056045 1740125 config.go:182] Loaded profile config "functional-977534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 11:35:49.056402 1740125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 11:35:49.056462 1740125 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 11:35:49.072110 1740125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39953
I0127 11:35:49.072630 1740125 main.go:141] libmachine: () Calling .GetVersion
I0127 11:35:49.073261 1740125 main.go:141] libmachine: Using API Version  1
I0127 11:35:49.073281 1740125 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 11:35:49.073665 1740125 main.go:141] libmachine: () Calling .GetMachineName
I0127 11:35:49.073932 1740125 main.go:141] libmachine: (functional-977534) Calling .GetState
I0127 11:35:49.075864 1740125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 11:35:49.075917 1740125 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 11:35:49.096413 1740125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37729
I0127 11:35:49.096828 1740125 main.go:141] libmachine: () Calling .GetVersion
I0127 11:35:49.097354 1740125 main.go:141] libmachine: Using API Version  1
I0127 11:35:49.097384 1740125 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 11:35:49.097768 1740125 main.go:141] libmachine: () Calling .GetMachineName
I0127 11:35:49.097962 1740125 main.go:141] libmachine: (functional-977534) Calling .DriverName
I0127 11:35:49.098140 1740125 ssh_runner.go:195] Run: systemctl --version
I0127 11:35:49.098162 1740125 main.go:141] libmachine: (functional-977534) Calling .GetSSHHostname
I0127 11:35:49.101624 1740125 main.go:141] libmachine: (functional-977534) DBG | domain functional-977534 has defined MAC address 52:54:00:56:1d:0f in network mk-functional-977534
I0127 11:35:49.102066 1740125 main.go:141] libmachine: (functional-977534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:1d:0f", ip: ""} in network mk-functional-977534: {Iface:virbr1 ExpiryTime:2025-01-27 12:32:36 +0000 UTC Type:0 Mac:52:54:00:56:1d:0f Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:functional-977534 Clientid:01:52:54:00:56:1d:0f}
I0127 11:35:49.102096 1740125 main.go:141] libmachine: (functional-977534) DBG | domain functional-977534 has defined IP address 192.168.39.48 and MAC address 52:54:00:56:1d:0f in network mk-functional-977534
I0127 11:35:49.102366 1740125 main.go:141] libmachine: (functional-977534) Calling .GetSSHPort
I0127 11:35:49.102544 1740125 main.go:141] libmachine: (functional-977534) Calling .GetSSHKeyPath
I0127 11:35:49.102680 1740125 main.go:141] libmachine: (functional-977534) Calling .GetSSHUsername
I0127 11:35:49.102820 1740125 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/functional-977534/id_rsa Username:docker}
I0127 11:35:49.189470 1740125 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 11:35:49.255273 1740125 main.go:141] libmachine: Making call to close driver server
I0127 11:35:49.255300 1740125 main.go:141] libmachine: (functional-977534) Calling .Close
I0127 11:35:49.255596 1740125 main.go:141] libmachine: (functional-977534) DBG | Closing plugin on server side
I0127 11:35:49.255604 1740125 main.go:141] libmachine: Successfully made call to close driver server
I0127 11:35:49.255634 1740125 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 11:35:49.255642 1740125 main.go:141] libmachine: Making call to close driver server
I0127 11:35:49.255650 1740125 main.go:141] libmachine: (functional-977534) Calling .Close
I0127 11:35:49.256033 1740125 main.go:141] libmachine: Successfully made call to close driver server
I0127 11:35:49.256049 1740125 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-977534 image ls --format yaml --alsologtostderr:
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "151021823"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e
repoDigests:
- docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "94963761"
- id: 9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1
repoDigests:
- docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a
- docker.io/library/nginx@sha256:2426c815287ed75a3a33dd28512eba4f0f783946844209ccf3fa8990817a4eb9
repoTags:
- docker.io/library/nginx:latest
size: "195872148"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:769a11bfd73df7db947d51b0f7a3a60383a0338904d6944cced924d33f0d7286
- registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.1
size: "98051552"
- id: e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5
- registry.k8s.io/kube-proxy@sha256:a739122f1b5b17e2db96006120ad5fb9a3c654da07322bcaa62263c403ef69a8
repoTags:
- registry.k8s.io/kube-proxy:v1.32.1
size: "95271321"
- id: 2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e
- registry.k8s.io/kube-scheduler@sha256:e2b8e00ff17f8b0427e34d28897d7bf6f7a63ec48913ea01d4082ab91ca28476
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.1
size: "70649158"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-977534
size: "4943877"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 7d0e234a56a0904ec1d49066fcda2353b03a750e0dd5f8517adcf35b8312a925
repoDigests:
- localhost/minikube-local-cache-test@sha256:82343623df30f62dfeed8e3494ec3b568ef5739dad3a66c6f6264fe821ed573d
repoTags:
- localhost/minikube-local-cache-test:functional-977534
size: "3330"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954
- registry.k8s.io/kube-controller-manager@sha256:c9067d10dcf5ca45b2be9260f3b15e9c94e05fd8039c53341a23d3b4cf0cc619
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.1
size: "90793286"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-977534 image ls --format yaml --alsologtostderr:
I0127 11:35:49.411377 1740231 out.go:345] Setting OutFile to fd 1 ...
I0127 11:35:49.411497 1740231 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:35:49.411510 1740231 out.go:358] Setting ErrFile to fd 2...
I0127 11:35:49.411516 1740231 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:35:49.411702 1740231 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-1724227/.minikube/bin
I0127 11:35:49.412671 1740231 config.go:182] Loaded profile config "functional-977534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 11:35:49.412907 1740231 config.go:182] Loaded profile config "functional-977534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 11:35:49.413860 1740231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 11:35:49.413944 1740231 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 11:35:49.429647 1740231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46201
I0127 11:35:49.430219 1740231 main.go:141] libmachine: () Calling .GetVersion
I0127 11:35:49.430849 1740231 main.go:141] libmachine: Using API Version  1
I0127 11:35:49.430880 1740231 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 11:35:49.431237 1740231 main.go:141] libmachine: () Calling .GetMachineName
I0127 11:35:49.431487 1740231 main.go:141] libmachine: (functional-977534) Calling .GetState
I0127 11:35:49.433225 1740231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 11:35:49.433277 1740231 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 11:35:49.448867 1740231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38597
I0127 11:35:49.449386 1740231 main.go:141] libmachine: () Calling .GetVersion
I0127 11:35:49.449922 1740231 main.go:141] libmachine: Using API Version  1
I0127 11:35:49.449948 1740231 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 11:35:49.450330 1740231 main.go:141] libmachine: () Calling .GetMachineName
I0127 11:35:49.450507 1740231 main.go:141] libmachine: (functional-977534) Calling .DriverName
I0127 11:35:49.450681 1740231 ssh_runner.go:195] Run: systemctl --version
I0127 11:35:49.450711 1740231 main.go:141] libmachine: (functional-977534) Calling .GetSSHHostname
I0127 11:35:49.453505 1740231 main.go:141] libmachine: (functional-977534) DBG | domain functional-977534 has defined MAC address 52:54:00:56:1d:0f in network mk-functional-977534
I0127 11:35:49.454178 1740231 main.go:141] libmachine: (functional-977534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:1d:0f", ip: ""} in network mk-functional-977534: {Iface:virbr1 ExpiryTime:2025-01-27 12:32:36 +0000 UTC Type:0 Mac:52:54:00:56:1d:0f Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:functional-977534 Clientid:01:52:54:00:56:1d:0f}
I0127 11:35:49.454209 1740231 main.go:141] libmachine: (functional-977534) DBG | domain functional-977534 has defined IP address 192.168.39.48 and MAC address 52:54:00:56:1d:0f in network mk-functional-977534
I0127 11:35:49.454397 1740231 main.go:141] libmachine: (functional-977534) Calling .GetSSHPort
I0127 11:35:49.454575 1740231 main.go:141] libmachine: (functional-977534) Calling .GetSSHKeyPath
I0127 11:35:49.454786 1740231 main.go:141] libmachine: (functional-977534) Calling .GetSSHUsername
I0127 11:35:49.454972 1740231 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/functional-977534/id_rsa Username:docker}
I0127 11:35:49.560975 1740231 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 11:35:49.604442 1740231 main.go:141] libmachine: Making call to close driver server
I0127 11:35:49.604459 1740231 main.go:141] libmachine: (functional-977534) Calling .Close
I0127 11:35:49.604696 1740231 main.go:141] libmachine: Successfully made call to close driver server
I0127 11:35:49.604714 1740231 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 11:35:49.604726 1740231 main.go:141] libmachine: Making call to close driver server
I0127 11:35:49.604733 1740231 main.go:141] libmachine: (functional-977534) Calling .Close
I0127 11:35:49.606376 1740231 main.go:141] libmachine: Successfully made call to close driver server
I0127 11:35:49.606379 1740231 main.go:141] libmachine: (functional-977534) DBG | Closing plugin on server side
I0127 11:35:49.606393 1740231 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)
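For reference, the YAML listing above is assembled from CRI-O's image store on the node; the stderr shows the underlying "sudo crictl images --output json" call. A quick manual sketch to inspect the same data, assuming the functional-977534 profile is still running:

# Raw CRI-O image data behind the listing (run against the live profile)
out/minikube-linux-amd64 -p functional-977534 ssh -- sudo crictl images --output json
# Or the formatted view the test exercises
out/minikube-linux-amd64 -p functional-977534 image ls --format yaml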

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-977534 ssh pgrep buildkitd: exit status 1 (201.674401ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 image build -t localhost/my-image:functional-977534 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-977534 image build -t localhost/my-image:functional-977534 testdata/build --alsologtostderr: (3.659743078s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-977534 image build -t localhost/my-image:functional-977534 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 112389dbd35
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-977534
--> c25249a121a
Successfully tagged localhost/my-image:functional-977534
c25249a121ab431e0f3bed252efcfd9240434b4668e01b9f96211f763cc4783e
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-977534 image build -t localhost/my-image:functional-977534 testdata/build --alsologtostderr:
I0127 11:35:49.767570 1740311 out.go:345] Setting OutFile to fd 1 ...
I0127 11:35:49.767979 1740311 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:35:49.768011 1740311 out.go:358] Setting ErrFile to fd 2...
I0127 11:35:49.768044 1740311 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:35:49.768492 1740311 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-1724227/.minikube/bin
I0127 11:35:49.769349 1740311 config.go:182] Loaded profile config "functional-977534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 11:35:49.770042 1740311 config.go:182] Loaded profile config "functional-977534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 11:35:49.770593 1740311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 11:35:49.770664 1740311 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 11:35:49.785018 1740311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45833
I0127 11:35:49.785495 1740311 main.go:141] libmachine: () Calling .GetVersion
I0127 11:35:49.786100 1740311 main.go:141] libmachine: Using API Version  1
I0127 11:35:49.786158 1740311 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 11:35:49.786564 1740311 main.go:141] libmachine: () Calling .GetMachineName
I0127 11:35:49.786791 1740311 main.go:141] libmachine: (functional-977534) Calling .GetState
I0127 11:35:49.788304 1740311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 11:35:49.788342 1740311 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 11:35:49.802085 1740311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43229
I0127 11:35:49.802543 1740311 main.go:141] libmachine: () Calling .GetVersion
I0127 11:35:49.803045 1740311 main.go:141] libmachine: Using API Version  1
I0127 11:35:49.803060 1740311 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 11:35:49.803388 1740311 main.go:141] libmachine: () Calling .GetMachineName
I0127 11:35:49.803542 1740311 main.go:141] libmachine: (functional-977534) Calling .DriverName
I0127 11:35:49.803700 1740311 ssh_runner.go:195] Run: systemctl --version
I0127 11:35:49.803723 1740311 main.go:141] libmachine: (functional-977534) Calling .GetSSHHostname
I0127 11:35:49.806553 1740311 main.go:141] libmachine: (functional-977534) DBG | domain functional-977534 has defined MAC address 52:54:00:56:1d:0f in network mk-functional-977534
I0127 11:35:49.807018 1740311 main.go:141] libmachine: (functional-977534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:1d:0f", ip: ""} in network mk-functional-977534: {Iface:virbr1 ExpiryTime:2025-01-27 12:32:36 +0000 UTC Type:0 Mac:52:54:00:56:1d:0f Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:functional-977534 Clientid:01:52:54:00:56:1d:0f}
I0127 11:35:49.807043 1740311 main.go:141] libmachine: (functional-977534) DBG | domain functional-977534 has defined IP address 192.168.39.48 and MAC address 52:54:00:56:1d:0f in network mk-functional-977534
I0127 11:35:49.807234 1740311 main.go:141] libmachine: (functional-977534) Calling .GetSSHPort
I0127 11:35:49.807421 1740311 main.go:141] libmachine: (functional-977534) Calling .GetSSHKeyPath
I0127 11:35:49.807540 1740311 main.go:141] libmachine: (functional-977534) Calling .GetSSHUsername
I0127 11:35:49.807659 1740311 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/functional-977534/id_rsa Username:docker}
I0127 11:35:49.917839 1740311 build_images.go:161] Building image from path: /tmp/build.357205791.tar
I0127 11:35:49.917916 1740311 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0127 11:35:49.927365 1740311 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.357205791.tar
I0127 11:35:49.931287 1740311 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.357205791.tar: stat -c "%s %y" /var/lib/minikube/build/build.357205791.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.357205791.tar': No such file or directory
I0127 11:35:49.931312 1740311 ssh_runner.go:362] scp /tmp/build.357205791.tar --> /var/lib/minikube/build/build.357205791.tar (3072 bytes)
I0127 11:35:49.958594 1740311 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.357205791
I0127 11:35:49.967510 1740311 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.357205791 -xf /var/lib/minikube/build/build.357205791.tar
I0127 11:35:49.979478 1740311 crio.go:315] Building image: /var/lib/minikube/build/build.357205791
I0127 11:35:49.979533 1740311 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-977534 /var/lib/minikube/build/build.357205791 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0127 11:35:53.344441 1740311 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-977534 /var/lib/minikube/build/build.357205791 --cgroup-manager=cgroupfs: (3.364873399s)
I0127 11:35:53.344518 1740311 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.357205791
I0127 11:35:53.358484 1740311 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.357205791.tar
I0127 11:35:53.368333 1740311 build_images.go:217] Built localhost/my-image:functional-977534 from /tmp/build.357205791.tar
I0127 11:35:53.368364 1740311 build_images.go:133] succeeded building to: functional-977534
I0127 11:35:53.368371 1740311 build_images.go:134] failed building to: 
I0127 11:35:53.368402 1740311 main.go:141] libmachine: Making call to close driver server
I0127 11:35:53.368417 1740311 main.go:141] libmachine: (functional-977534) Calling .Close
I0127 11:35:53.368699 1740311 main.go:141] libmachine: Successfully made call to close driver server
I0127 11:35:53.368721 1740311 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 11:35:53.368737 1740311 main.go:141] libmachine: Making call to close driver server
I0127 11:35:53.368732 1740311 main.go:141] libmachine: (functional-977534) DBG | Closing plugin on server side
I0127 11:35:53.368749 1740311 main.go:141] libmachine: (functional-977534) Calling .Close
I0127 11:35:53.369070 1740311 main.go:141] libmachine: Successfully made call to close driver server
I0127 11:35:53.369098 1740311 main.go:141] libmachine: (functional-977534) DBG | Closing plugin on server side
I0127 11:35:53.369102 1740311 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.08s)
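The stdout above records the three build steps. For reference, a minimal reconstruction of an equivalent build context, runnable by hand; the directory name and the contents of content.txt are assumptions, since the log only shows the file being added:

# Hypothetical stand-in for testdata/build, inferred from the STEP lines above
mkdir -p /tmp/build-demo && cd /tmp/build-demo
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
echo demo > content.txt
out/minikube-linux-amd64 -p functional-977534 image build -t localhost/my-image:functional-977534 .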

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.695967124s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-977534
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.72s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 image load --daemon kicbase/echo-server:functional-977534 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-977534 image load --daemon kicbase/echo-server:functional-977534 --alsologtostderr: (1.070104354s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 image load --daemon kicbase/echo-server:functional-977534 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-977534
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 image load --daemon kicbase/echo-server:functional-977534 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.67s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 image save kicbase/echo-server:functional-977534 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 image rm kicbase/echo-server:functional-977534 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.76s)
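Taken together, ImageSaveToFile, ImageRemove and ImageLoadFromFile above exercise a save/remove/load round trip. A condensed manual version, with an arbitrary tarball path in place of the workspace path used by the job:

# Save the image out of the cluster, remove it, then load it back from the tarball
out/minikube-linux-amd64 -p functional-977534 image save kicbase/echo-server:functional-977534 /tmp/echo-server-save.tar
out/minikube-linux-amd64 -p functional-977534 image rm kicbase/echo-server:functional-977534
out/minikube-linux-amd64 -p functional-977534 image load /tmp/echo-server-save.tar
out/minikube-linux-amd64 -p functional-977534 image ls   # the echo-server tag should reappear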

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-977534
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 image save --daemon kicbase/echo-server:functional-977534 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-977534
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 service list -o json
functional_test.go:1494: Took "295.655308ms" to run "out/minikube-linux-amd64 -p functional-977534 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.48:30468
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "311.110194ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "52.008941ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "400.888512ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "54.96512ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.48:30468
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.36s)
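The ServiceCmd tests above resolve the hello-node NodePort in list, JSON, HTTPS, template and plain URL form. A manual probe of the endpoint this run discovered; the IP and port are per-run values and will differ on other machines:

# Look up the service URL, then hit the NodePort directly
out/minikube-linux-amd64 -p functional-977534 service hello-node --url
curl -s http://192.168.39.48:30468/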

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (26.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-977534 /tmp/TestFunctionalparallelMountCmdany-port1595632470/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1737977718585734987" to /tmp/TestFunctionalparallelMountCmdany-port1595632470/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1737977718585734987" to /tmp/TestFunctionalparallelMountCmdany-port1595632470/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1737977718585734987" to /tmp/TestFunctionalparallelMountCmdany-port1595632470/001/test-1737977718585734987
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-977534 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (266.876928ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0127 11:35:18.853020 1731396 retry.go:31] will retry after 374.841652ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 27 11:35 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 27 11:35 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 27 11:35 test-1737977718585734987
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 ssh cat /mount-9p/test-1737977718585734987
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-977534 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [71705d5b-2e73-4211-8dc4-d515a2e308db] Pending
helpers_test.go:344: "busybox-mount" [71705d5b-2e73-4211-8dc4-d515a2e308db] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [71705d5b-2e73-4211-8dc4-d515a2e308db] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [71705d5b-2e73-4211-8dc4-d515a2e308db] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 24.003424412s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-977534 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-977534 /tmp/TestFunctionalparallelMountCmdany-port1595632470/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (26.72s)
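The any-port flow above can be replayed by hand. A sketch, assuming an arbitrary host directory and allowing the mount helper a few seconds to come up (the test retries instead of sleeping):

# 9p mount round trip: mount a host dir, verify it from the guest, then unmount
mkdir -p /tmp/mount-demo && echo hello > /tmp/mount-demo/created-by-hand
out/minikube-linux-amd64 mount -p functional-977534 /tmp/mount-demo:/mount-9p &
sleep 5
out/minikube-linux-amd64 -p functional-977534 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-977534 ssh -- ls -la /mount-9p
out/minikube-linux-amd64 -p functional-977534 ssh "sudo umount -f /mount-9p"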

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-977534 /tmp/TestFunctionalparallelMountCmdspecific-port3223978141/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-977534 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (211.57837ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0127 11:35:45.512946 1731396 retry.go:31] will retry after 719.962513ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-977534 /tmp/TestFunctionalparallelMountCmdspecific-port3223978141/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-977534 ssh "sudo umount -f /mount-9p": exit status 1 (277.630899ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-977534 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-977534 /tmp/TestFunctionalparallelMountCmdspecific-port3223978141/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.20s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-977534 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2567222779/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-977534 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2567222779/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-977534 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2567222779/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-977534 ssh "findmnt -T" /mount1: exit status 1 (304.397189ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0127 11:35:47.811460 1731396 retry.go:31] will retry after 740.91523ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-977534 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-977534 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-977534 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2567222779/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-977534 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2567222779/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-977534 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2567222779/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.86s)
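VerifyCleanup starts three concurrent mounts and then relies on mount --kill=true to reap the helper processes. The equivalent manual sequence, reusing the arbitrary /tmp/mount-demo directory from the sketch above:

# Start several mounts of the same host dir, then kill all mount helpers for the profile
out/minikube-linux-amd64 mount -p functional-977534 /tmp/mount-demo:/mount1 &
out/minikube-linux-amd64 mount -p functional-977534 /tmp/mount-demo:/mount2 &
out/minikube-linux-amd64 mount -p functional-977534 /tmp/mount-demo:/mount3 &
out/minikube-linux-amd64 -p functional-977534 ssh "findmnt -T" /mount1
out/minikube-linux-amd64 mount -p functional-977534 --kill=true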

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-977534
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-977534
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-977534
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (197.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-691084 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0127 11:36:36.326929 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:37:04.030937 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-691084 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m16.999093209s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (197.64s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (6.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-691084 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-691084 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-691084 -- rollout status deployment/busybox: (4.001389239s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-691084 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-691084 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-691084 -- exec busybox-58667487b6-5mw7q -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-691084 -- exec busybox-58667487b6-h2ntw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-691084 -- exec busybox-58667487b6-hcg95 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-691084 -- exec busybox-58667487b6-5mw7q -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-691084 -- exec busybox-58667487b6-h2ntw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-691084 -- exec busybox-58667487b6-hcg95 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-691084 -- exec busybox-58667487b6-5mw7q -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-691084 -- exec busybox-58667487b6-h2ntw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-691084 -- exec busybox-58667487b6-hcg95 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.07s)
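DeployApp applies the busybox deployment from testdata/ha/ha-pod-dns-test.yaml and then checks name resolution from every replica. The per-pod checks above condense to a loop; pod names are per-run values, and the loop simply iterates whatever the jsonpath query returns:

# Resolve an external and an in-cluster name from each busybox replica
for pod in $(out/minikube-linux-amd64 kubectl -p ha-691084 -- get pods -o jsonpath='{.items[*].metadata.name}'); do
  out/minikube-linux-amd64 kubectl -p ha-691084 -- exec "$pod" -- nslookup kubernetes.io
  out/minikube-linux-amd64 kubectl -p ha-691084 -- exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
done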

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-691084 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-691084 -- exec busybox-58667487b6-5mw7q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-691084 -- exec busybox-58667487b6-5mw7q -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-691084 -- exec busybox-58667487b6-h2ntw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-691084 -- exec busybox-58667487b6-h2ntw -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-691084 -- exec busybox-58667487b6-hcg95 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-691084 -- exec busybox-58667487b6-hcg95 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.12s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (59.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-691084 -v=7 --alsologtostderr
E0127 11:40:07.002663 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/functional-977534/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:40:07.009191 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/functional-977534/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:40:07.020649 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/functional-977534/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:40:07.042053 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/functional-977534/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:40:07.083512 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/functional-977534/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:40:07.165123 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/functional-977534/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:40:07.327194 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/functional-977534/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:40:07.648937 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/functional-977534/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:40:08.291233 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/functional-977534/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:40:09.573568 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/functional-977534/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:40:12.135565 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/functional-977534/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:40:17.257730 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/functional-977534/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-691084 -v=7 --alsologtostderr: (58.361280413s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.21s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-691084 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (12.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 cp testdata/cp-test.txt ha-691084:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 ssh -n ha-691084 "sudo cat /home/docker/cp-test.txt"
E0127 11:40:27.499469 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/functional-977534/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 cp ha-691084:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3363865677/001/cp-test_ha-691084.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 ssh -n ha-691084 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 cp ha-691084:/home/docker/cp-test.txt ha-691084-m02:/home/docker/cp-test_ha-691084_ha-691084-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 ssh -n ha-691084 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 ssh -n ha-691084-m02 "sudo cat /home/docker/cp-test_ha-691084_ha-691084-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 cp ha-691084:/home/docker/cp-test.txt ha-691084-m03:/home/docker/cp-test_ha-691084_ha-691084-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 ssh -n ha-691084 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 ssh -n ha-691084-m03 "sudo cat /home/docker/cp-test_ha-691084_ha-691084-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 cp ha-691084:/home/docker/cp-test.txt ha-691084-m04:/home/docker/cp-test_ha-691084_ha-691084-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 ssh -n ha-691084 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 ssh -n ha-691084-m04 "sudo cat /home/docker/cp-test_ha-691084_ha-691084-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 cp testdata/cp-test.txt ha-691084-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 ssh -n ha-691084-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 cp ha-691084-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3363865677/001/cp-test_ha-691084-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 ssh -n ha-691084-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 cp ha-691084-m02:/home/docker/cp-test.txt ha-691084:/home/docker/cp-test_ha-691084-m02_ha-691084.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 ssh -n ha-691084-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 ssh -n ha-691084 "sudo cat /home/docker/cp-test_ha-691084-m02_ha-691084.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 cp ha-691084-m02:/home/docker/cp-test.txt ha-691084-m03:/home/docker/cp-test_ha-691084-m02_ha-691084-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 ssh -n ha-691084-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 ssh -n ha-691084-m03 "sudo cat /home/docker/cp-test_ha-691084-m02_ha-691084-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 cp ha-691084-m02:/home/docker/cp-test.txt ha-691084-m04:/home/docker/cp-test_ha-691084-m02_ha-691084-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 ssh -n ha-691084-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 ssh -n ha-691084-m04 "sudo cat /home/docker/cp-test_ha-691084-m02_ha-691084-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 cp testdata/cp-test.txt ha-691084-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 ssh -n ha-691084-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 cp ha-691084-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3363865677/001/cp-test_ha-691084-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 ssh -n ha-691084-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 cp ha-691084-m03:/home/docker/cp-test.txt ha-691084:/home/docker/cp-test_ha-691084-m03_ha-691084.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 ssh -n ha-691084-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 ssh -n ha-691084 "sudo cat /home/docker/cp-test_ha-691084-m03_ha-691084.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 cp ha-691084-m03:/home/docker/cp-test.txt ha-691084-m02:/home/docker/cp-test_ha-691084-m03_ha-691084-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 ssh -n ha-691084-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 ssh -n ha-691084-m02 "sudo cat /home/docker/cp-test_ha-691084-m03_ha-691084-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 cp ha-691084-m03:/home/docker/cp-test.txt ha-691084-m04:/home/docker/cp-test_ha-691084-m03_ha-691084-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 ssh -n ha-691084-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 ssh -n ha-691084-m04 "sudo cat /home/docker/cp-test_ha-691084-m03_ha-691084-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 cp testdata/cp-test.txt ha-691084-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 ssh -n ha-691084-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 cp ha-691084-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3363865677/001/cp-test_ha-691084-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 ssh -n ha-691084-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 cp ha-691084-m04:/home/docker/cp-test.txt ha-691084:/home/docker/cp-test_ha-691084-m04_ha-691084.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 ssh -n ha-691084-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 ssh -n ha-691084 "sudo cat /home/docker/cp-test_ha-691084-m04_ha-691084.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 cp ha-691084-m04:/home/docker/cp-test.txt ha-691084-m02:/home/docker/cp-test_ha-691084-m04_ha-691084-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 ssh -n ha-691084-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 ssh -n ha-691084-m02 "sudo cat /home/docker/cp-test_ha-691084-m04_ha-691084-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 cp ha-691084-m04:/home/docker/cp-test.txt ha-691084-m03:/home/docker/cp-test_ha-691084-m04_ha-691084-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 ssh -n ha-691084-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 ssh -n ha-691084-m03 "sudo cat /home/docker/cp-test_ha-691084-m04_ha-691084-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.93s)
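
The CopyFile block cycles the same three operations for every node pair: copy a local file into a node, copy it back out, copy it node to node, and verify each hop with ssh -- sudo cat. A minimal sketch of one round trip, with commands taken verbatim from the log (only the /tmp destination name is illustrative):

    # local -> node, then read it back
    out/minikube-linux-amd64 -p ha-691084 cp testdata/cp-test.txt ha-691084:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-691084 ssh -n ha-691084 "sudo cat /home/docker/cp-test.txt"
    # node -> local
    out/minikube-linux-amd64 -p ha-691084 cp ha-691084:/home/docker/cp-test.txt /tmp/cp-test_ha-691084.txt
    # node -> node (control plane to worker), then verify on the target
    out/minikube-linux-amd64 -p ha-691084 cp ha-691084:/home/docker/cp-test.txt ha-691084-m04:/home/docker/cp-test_ha-691084_ha-691084-m04.txt
    out/minikube-linux-amd64 -p ha-691084 ssh -n ha-691084-m04 "sudo cat /home/docker/cp-test_ha-691084_ha-691084-m04.txt"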

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (91.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 node stop m02 -v=7 --alsologtostderr
E0127 11:40:47.980850 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/functional-977534/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:41:28.943007 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/functional-977534/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:41:36.326639 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-691084 node stop m02 -v=7 --alsologtostderr: (1m30.979618637s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-691084 status -v=7 --alsologtostderr: exit status 7 (635.526147ms)

                                                
                                                
-- stdout --
	ha-691084
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-691084-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-691084-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-691084-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 11:42:10.254730 1745036 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:42:10.255037 1745036 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:42:10.255047 1745036 out.go:358] Setting ErrFile to fd 2...
	I0127 11:42:10.255051 1745036 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:42:10.255245 1745036 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-1724227/.minikube/bin
	I0127 11:42:10.255435 1745036 out.go:352] Setting JSON to false
	I0127 11:42:10.255466 1745036 mustload.go:65] Loading cluster: ha-691084
	I0127 11:42:10.255589 1745036 notify.go:220] Checking for updates...
	I0127 11:42:10.255884 1745036 config.go:182] Loaded profile config "ha-691084": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:42:10.255910 1745036 status.go:174] checking status of ha-691084 ...
	I0127 11:42:10.256289 1745036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:42:10.256332 1745036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:42:10.277264 1745036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43847
	I0127 11:42:10.277653 1745036 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:42:10.278324 1745036 main.go:141] libmachine: Using API Version  1
	I0127 11:42:10.278351 1745036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:42:10.278687 1745036 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:42:10.278940 1745036 main.go:141] libmachine: (ha-691084) Calling .GetState
	I0127 11:42:10.280504 1745036 status.go:371] ha-691084 host status = "Running" (err=<nil>)
	I0127 11:42:10.280523 1745036 host.go:66] Checking if "ha-691084" exists ...
	I0127 11:42:10.280903 1745036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:42:10.280949 1745036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:42:10.296809 1745036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43093
	I0127 11:42:10.297346 1745036 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:42:10.297809 1745036 main.go:141] libmachine: Using API Version  1
	I0127 11:42:10.297828 1745036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:42:10.298169 1745036 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:42:10.298353 1745036 main.go:141] libmachine: (ha-691084) Calling .GetIP
	I0127 11:42:10.300926 1745036 main.go:141] libmachine: (ha-691084) DBG | domain ha-691084 has defined MAC address 52:54:00:68:49:7a in network mk-ha-691084
	I0127 11:42:10.301323 1745036 main.go:141] libmachine: (ha-691084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:49:7a", ip: ""} in network mk-ha-691084: {Iface:virbr1 ExpiryTime:2025-01-27 12:36:16 +0000 UTC Type:0 Mac:52:54:00:68:49:7a Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:ha-691084 Clientid:01:52:54:00:68:49:7a}
	I0127 11:42:10.301345 1745036 main.go:141] libmachine: (ha-691084) DBG | domain ha-691084 has defined IP address 192.168.39.229 and MAC address 52:54:00:68:49:7a in network mk-ha-691084
	I0127 11:42:10.301517 1745036 host.go:66] Checking if "ha-691084" exists ...
	I0127 11:42:10.301785 1745036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:42:10.301857 1745036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:42:10.316666 1745036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36859
	I0127 11:42:10.317182 1745036 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:42:10.317818 1745036 main.go:141] libmachine: Using API Version  1
	I0127 11:42:10.317849 1745036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:42:10.318182 1745036 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:42:10.318363 1745036 main.go:141] libmachine: (ha-691084) Calling .DriverName
	I0127 11:42:10.318546 1745036 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 11:42:10.318579 1745036 main.go:141] libmachine: (ha-691084) Calling .GetSSHHostname
	I0127 11:42:10.321347 1745036 main.go:141] libmachine: (ha-691084) DBG | domain ha-691084 has defined MAC address 52:54:00:68:49:7a in network mk-ha-691084
	I0127 11:42:10.321973 1745036 main.go:141] libmachine: (ha-691084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:49:7a", ip: ""} in network mk-ha-691084: {Iface:virbr1 ExpiryTime:2025-01-27 12:36:16 +0000 UTC Type:0 Mac:52:54:00:68:49:7a Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:ha-691084 Clientid:01:52:54:00:68:49:7a}
	I0127 11:42:10.322025 1745036 main.go:141] libmachine: (ha-691084) DBG | domain ha-691084 has defined IP address 192.168.39.229 and MAC address 52:54:00:68:49:7a in network mk-ha-691084
	I0127 11:42:10.322200 1745036 main.go:141] libmachine: (ha-691084) Calling .GetSSHPort
	I0127 11:42:10.322382 1745036 main.go:141] libmachine: (ha-691084) Calling .GetSSHKeyPath
	I0127 11:42:10.322565 1745036 main.go:141] libmachine: (ha-691084) Calling .GetSSHUsername
	I0127 11:42:10.322701 1745036 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/ha-691084/id_rsa Username:docker}
	I0127 11:42:10.407705 1745036 ssh_runner.go:195] Run: systemctl --version
	I0127 11:42:10.415236 1745036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:42:10.433770 1745036 kubeconfig.go:125] found "ha-691084" server: "https://192.168.39.254:8443"
	I0127 11:42:10.433821 1745036 api_server.go:166] Checking apiserver status ...
	I0127 11:42:10.433861 1745036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:42:10.448625 1745036 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1165/cgroup
	W0127 11:42:10.457361 1745036 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1165/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0127 11:42:10.457407 1745036 ssh_runner.go:195] Run: ls
	I0127 11:42:10.461299 1745036 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0127 11:42:10.467546 1745036 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0127 11:42:10.467568 1745036 status.go:463] ha-691084 apiserver status = Running (err=<nil>)
	I0127 11:42:10.467578 1745036 status.go:176] ha-691084 status: &{Name:ha-691084 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 11:42:10.467614 1745036 status.go:174] checking status of ha-691084-m02 ...
	I0127 11:42:10.467931 1745036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:42:10.467983 1745036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:42:10.484058 1745036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33599
	I0127 11:42:10.484552 1745036 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:42:10.485059 1745036 main.go:141] libmachine: Using API Version  1
	I0127 11:42:10.485084 1745036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:42:10.485492 1745036 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:42:10.485705 1745036 main.go:141] libmachine: (ha-691084-m02) Calling .GetState
	I0127 11:42:10.487169 1745036 status.go:371] ha-691084-m02 host status = "Stopped" (err=<nil>)
	I0127 11:42:10.487183 1745036 status.go:384] host is not running, skipping remaining checks
	I0127 11:42:10.487190 1745036 status.go:176] ha-691084-m02 status: &{Name:ha-691084-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 11:42:10.487211 1745036 status.go:174] checking status of ha-691084-m03 ...
	I0127 11:42:10.487502 1745036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:42:10.487544 1745036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:42:10.504628 1745036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34737
	I0127 11:42:10.505048 1745036 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:42:10.505584 1745036 main.go:141] libmachine: Using API Version  1
	I0127 11:42:10.505604 1745036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:42:10.505909 1745036 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:42:10.506103 1745036 main.go:141] libmachine: (ha-691084-m03) Calling .GetState
	I0127 11:42:10.507577 1745036 status.go:371] ha-691084-m03 host status = "Running" (err=<nil>)
	I0127 11:42:10.507594 1745036 host.go:66] Checking if "ha-691084-m03" exists ...
	I0127 11:42:10.507935 1745036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:42:10.507985 1745036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:42:10.522371 1745036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34499
	I0127 11:42:10.522829 1745036 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:42:10.523293 1745036 main.go:141] libmachine: Using API Version  1
	I0127 11:42:10.523311 1745036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:42:10.523692 1745036 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:42:10.523868 1745036 main.go:141] libmachine: (ha-691084-m03) Calling .GetIP
	I0127 11:42:10.526640 1745036 main.go:141] libmachine: (ha-691084-m03) DBG | domain ha-691084-m03 has defined MAC address 52:54:00:5b:64:f9 in network mk-ha-691084
	I0127 11:42:10.527070 1745036 main.go:141] libmachine: (ha-691084-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:64:f9", ip: ""} in network mk-ha-691084: {Iface:virbr1 ExpiryTime:2025-01-27 12:38:15 +0000 UTC Type:0 Mac:52:54:00:5b:64:f9 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-691084-m03 Clientid:01:52:54:00:5b:64:f9}
	I0127 11:42:10.527104 1745036 main.go:141] libmachine: (ha-691084-m03) DBG | domain ha-691084-m03 has defined IP address 192.168.39.32 and MAC address 52:54:00:5b:64:f9 in network mk-ha-691084
	I0127 11:42:10.527247 1745036 host.go:66] Checking if "ha-691084-m03" exists ...
	I0127 11:42:10.527640 1745036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:42:10.527688 1745036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:42:10.542407 1745036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42357
	I0127 11:42:10.542840 1745036 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:42:10.543331 1745036 main.go:141] libmachine: Using API Version  1
	I0127 11:42:10.543357 1745036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:42:10.543669 1745036 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:42:10.543856 1745036 main.go:141] libmachine: (ha-691084-m03) Calling .DriverName
	I0127 11:42:10.544059 1745036 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 11:42:10.544083 1745036 main.go:141] libmachine: (ha-691084-m03) Calling .GetSSHHostname
	I0127 11:42:10.546442 1745036 main.go:141] libmachine: (ha-691084-m03) DBG | domain ha-691084-m03 has defined MAC address 52:54:00:5b:64:f9 in network mk-ha-691084
	I0127 11:42:10.546878 1745036 main.go:141] libmachine: (ha-691084-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:64:f9", ip: ""} in network mk-ha-691084: {Iface:virbr1 ExpiryTime:2025-01-27 12:38:15 +0000 UTC Type:0 Mac:52:54:00:5b:64:f9 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-691084-m03 Clientid:01:52:54:00:5b:64:f9}
	I0127 11:42:10.546907 1745036 main.go:141] libmachine: (ha-691084-m03) DBG | domain ha-691084-m03 has defined IP address 192.168.39.32 and MAC address 52:54:00:5b:64:f9 in network mk-ha-691084
	I0127 11:42:10.547040 1745036 main.go:141] libmachine: (ha-691084-m03) Calling .GetSSHPort
	I0127 11:42:10.547224 1745036 main.go:141] libmachine: (ha-691084-m03) Calling .GetSSHKeyPath
	I0127 11:42:10.547371 1745036 main.go:141] libmachine: (ha-691084-m03) Calling .GetSSHUsername
	I0127 11:42:10.547523 1745036 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/ha-691084-m03/id_rsa Username:docker}
	I0127 11:42:10.631373 1745036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:42:10.648397 1745036 kubeconfig.go:125] found "ha-691084" server: "https://192.168.39.254:8443"
	I0127 11:42:10.648428 1745036 api_server.go:166] Checking apiserver status ...
	I0127 11:42:10.648460 1745036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:42:10.663620 1745036 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1457/cgroup
	W0127 11:42:10.672524 1745036 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1457/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0127 11:42:10.672590 1745036 ssh_runner.go:195] Run: ls
	I0127 11:42:10.676779 1745036 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0127 11:42:10.681612 1745036 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0127 11:42:10.681646 1745036 status.go:463] ha-691084-m03 apiserver status = Running (err=<nil>)
	I0127 11:42:10.681655 1745036 status.go:176] ha-691084-m03 status: &{Name:ha-691084-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 11:42:10.681674 1745036 status.go:174] checking status of ha-691084-m04 ...
	I0127 11:42:10.682074 1745036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:42:10.682120 1745036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:42:10.697689 1745036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45581
	I0127 11:42:10.698213 1745036 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:42:10.698719 1745036 main.go:141] libmachine: Using API Version  1
	I0127 11:42:10.698759 1745036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:42:10.699130 1745036 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:42:10.699332 1745036 main.go:141] libmachine: (ha-691084-m04) Calling .GetState
	I0127 11:42:10.700691 1745036 status.go:371] ha-691084-m04 host status = "Running" (err=<nil>)
	I0127 11:42:10.700711 1745036 host.go:66] Checking if "ha-691084-m04" exists ...
	I0127 11:42:10.700993 1745036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:42:10.701036 1745036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:42:10.715659 1745036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35035
	I0127 11:42:10.716102 1745036 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:42:10.716539 1745036 main.go:141] libmachine: Using API Version  1
	I0127 11:42:10.716559 1745036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:42:10.716934 1745036 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:42:10.717150 1745036 main.go:141] libmachine: (ha-691084-m04) Calling .GetIP
	I0127 11:42:10.720003 1745036 main.go:141] libmachine: (ha-691084-m04) DBG | domain ha-691084-m04 has defined MAC address 52:54:00:98:d4:cc in network mk-ha-691084
	I0127 11:42:10.720451 1745036 main.go:141] libmachine: (ha-691084-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d4:cc", ip: ""} in network mk-ha-691084: {Iface:virbr1 ExpiryTime:2025-01-27 12:39:41 +0000 UTC Type:0 Mac:52:54:00:98:d4:cc Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:ha-691084-m04 Clientid:01:52:54:00:98:d4:cc}
	I0127 11:42:10.720477 1745036 main.go:141] libmachine: (ha-691084-m04) DBG | domain ha-691084-m04 has defined IP address 192.168.39.204 and MAC address 52:54:00:98:d4:cc in network mk-ha-691084
	I0127 11:42:10.720608 1745036 host.go:66] Checking if "ha-691084-m04" exists ...
	I0127 11:42:10.720899 1745036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:42:10.720934 1745036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:42:10.735371 1745036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39137
	I0127 11:42:10.735732 1745036 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:42:10.736249 1745036 main.go:141] libmachine: Using API Version  1
	I0127 11:42:10.736269 1745036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:42:10.736574 1745036 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:42:10.736764 1745036 main.go:141] libmachine: (ha-691084-m04) Calling .DriverName
	I0127 11:42:10.736933 1745036 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 11:42:10.736952 1745036 main.go:141] libmachine: (ha-691084-m04) Calling .GetSSHHostname
	I0127 11:42:10.739549 1745036 main.go:141] libmachine: (ha-691084-m04) DBG | domain ha-691084-m04 has defined MAC address 52:54:00:98:d4:cc in network mk-ha-691084
	I0127 11:42:10.739940 1745036 main.go:141] libmachine: (ha-691084-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d4:cc", ip: ""} in network mk-ha-691084: {Iface:virbr1 ExpiryTime:2025-01-27 12:39:41 +0000 UTC Type:0 Mac:52:54:00:98:d4:cc Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:ha-691084-m04 Clientid:01:52:54:00:98:d4:cc}
	I0127 11:42:10.739962 1745036 main.go:141] libmachine: (ha-691084-m04) DBG | domain ha-691084-m04 has defined IP address 192.168.39.204 and MAC address 52:54:00:98:d4:cc in network mk-ha-691084
	I0127 11:42:10.740143 1745036 main.go:141] libmachine: (ha-691084-m04) Calling .GetSSHPort
	I0127 11:42:10.740295 1745036 main.go:141] libmachine: (ha-691084-m04) Calling .GetSSHKeyPath
	I0127 11:42:10.740459 1745036 main.go:141] libmachine: (ha-691084-m04) Calling .GetSSHUsername
	I0127 11:42:10.740576 1745036 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/ha-691084-m04/id_rsa Username:docker}
	I0127 11:42:10.822413 1745036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:42:10.838316 1745036 status.go:176] ha-691084-m04 status: &{Name:ha-691084-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.62s)
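
The stop/status pair above is the core of this check: stop one control-plane node, then expect status to keep printing per-node state but exit non-zero (exit status 7) while that node is down. A minimal sketch with the same commands; the echo $? line is only an illustrative way to surface the exit code:

    out/minikube-linux-amd64 -p ha-691084 node stop m02 -v=7 --alsologtostderr
    out/minikube-linux-amd64 -p ha-691084 status -v=7 --alsologtostderr
    echo $?   # expected to be 7 while ha-691084-m02 is stopped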

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.63s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (47.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 node start m02 -v=7 --alsologtostderr
E0127 11:42:50.864576 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/functional-977534/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-691084 node start m02 -v=7 --alsologtostderr: (46.653965873s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (47.56s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.85s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (424.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-691084 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-691084 -v=7 --alsologtostderr
E0127 11:45:07.002447 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/functional-977534/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:45:34.705957 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/functional-977534/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:46:36.326584 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-691084 -v=7 --alsologtostderr: (4m33.786729195s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-691084 --wait=true -v=7 --alsologtostderr
E0127 11:47:59.392581 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-691084 --wait=true -v=7 --alsologtostderr: (2m30.873027677s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-691084
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (424.77s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 node delete m03 -v=7 --alsologtostderr
E0127 11:50:07.002898 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/functional-977534/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-691084 node delete m03 -v=7 --alsologtostderr: (17.37913319s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.11s)
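
After deleting m03, the test confirms every remaining node still reports Ready using the go-template query shown above. A minimal sketch of the same check; the template text is copied from the log, with the shell quoting adjusted so it can be pasted into an interactive shell:

    out/minikube-linux-amd64 -p ha-691084 node delete m03 -v=7 --alsologtostderr
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'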

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.61s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (272.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 stop -v=7 --alsologtostderr
E0127 11:51:36.327197 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-691084 stop -v=7 --alsologtostderr: (4m32.78013994s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-691084 status -v=7 --alsologtostderr: exit status 7 (112.084496ms)

                                                
                                                
-- stdout --
	ha-691084
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-691084-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-691084-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 11:54:56.185503 1749232 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:54:56.185619 1749232 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:54:56.185630 1749232 out.go:358] Setting ErrFile to fd 2...
	I0127 11:54:56.185634 1749232 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:54:56.185874 1749232 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-1724227/.minikube/bin
	I0127 11:54:56.186075 1749232 out.go:352] Setting JSON to false
	I0127 11:54:56.186103 1749232 mustload.go:65] Loading cluster: ha-691084
	I0127 11:54:56.186207 1749232 notify.go:220] Checking for updates...
	I0127 11:54:56.186525 1749232 config.go:182] Loaded profile config "ha-691084": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:54:56.186548 1749232 status.go:174] checking status of ha-691084 ...
	I0127 11:54:56.186970 1749232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:54:56.187016 1749232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:54:56.209085 1749232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38425
	I0127 11:54:56.209558 1749232 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:54:56.210202 1749232 main.go:141] libmachine: Using API Version  1
	I0127 11:54:56.210234 1749232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:54:56.210576 1749232 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:54:56.210774 1749232 main.go:141] libmachine: (ha-691084) Calling .GetState
	I0127 11:54:56.212435 1749232 status.go:371] ha-691084 host status = "Stopped" (err=<nil>)
	I0127 11:54:56.212449 1749232 status.go:384] host is not running, skipping remaining checks
	I0127 11:54:56.212455 1749232 status.go:176] ha-691084 status: &{Name:ha-691084 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 11:54:56.212472 1749232 status.go:174] checking status of ha-691084-m02 ...
	I0127 11:54:56.212757 1749232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:54:56.212803 1749232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:54:56.227487 1749232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37097
	I0127 11:54:56.227864 1749232 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:54:56.228284 1749232 main.go:141] libmachine: Using API Version  1
	I0127 11:54:56.228308 1749232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:54:56.228615 1749232 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:54:56.228815 1749232 main.go:141] libmachine: (ha-691084-m02) Calling .GetState
	I0127 11:54:56.230209 1749232 status.go:371] ha-691084-m02 host status = "Stopped" (err=<nil>)
	I0127 11:54:56.230222 1749232 status.go:384] host is not running, skipping remaining checks
	I0127 11:54:56.230229 1749232 status.go:176] ha-691084-m02 status: &{Name:ha-691084-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 11:54:56.230249 1749232 status.go:174] checking status of ha-691084-m04 ...
	I0127 11:54:56.230543 1749232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:54:56.230589 1749232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:54:56.244767 1749232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42205
	I0127 11:54:56.245073 1749232 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:54:56.245522 1749232 main.go:141] libmachine: Using API Version  1
	I0127 11:54:56.245542 1749232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:54:56.245832 1749232 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:54:56.246049 1749232 main.go:141] libmachine: (ha-691084-m04) Calling .GetState
	I0127 11:54:56.247400 1749232 status.go:371] ha-691084-m04 host status = "Stopped" (err=<nil>)
	I0127 11:54:56.247417 1749232 status.go:384] host is not running, skipping remaining checks
	I0127 11:54:56.247424 1749232 status.go:176] ha-691084-m04 status: &{Name:ha-691084-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.89s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (103.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-691084 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0127 11:55:07.001945 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/functional-977534/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:56:30.067718 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/functional-977534/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:56:36.327174 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-691084 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m42.366728676s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (103.20s)
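
RestartCluster is the whole-cluster variant of the pattern above: stop the entire profile, start it again with the original driver and runtime flags, then verify status and node readiness. A minimal sketch with the flags copied from the StopCluster and RestartCluster log lines:

    out/minikube-linux-amd64 -p ha-691084 stop -v=7 --alsologtostderr
    out/minikube-linux-amd64 start -p ha-691084 --wait=true -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p ha-691084 status -v=7 --alsologtostderr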

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.63s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (77.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-691084 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-691084 --control-plane -v=7 --alsologtostderr: (1m16.540947529s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-691084 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (77.40s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.83s)

                                                
                                    
TestJSONOutput/start/Command (77.97s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-555984 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-555984 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m17.966811224s)
--- PASS: TestJSONOutput/start/Command (77.97s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.67s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-555984 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.6s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-555984 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.34s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-555984 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-555984 --output=json --user=testUser: (7.338319333s)
--- PASS: TestJSONOutput/stop/Command (7.34s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-222108 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-222108 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (62.859199ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d350663b-98f2-4c8a-8fe0-9ebd16624ff0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-222108] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"eff4618d-45b8-421c-b4c5-c96b54e78ef2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20318"}}
	{"specversion":"1.0","id":"71220c29-d2c7-4244-bbe3-2020319c6d40","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6e29fe03-fa9e-44ee-a019-f2268bf0b2e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20318-1724227/kubeconfig"}}
	{"specversion":"1.0","id":"377c70b4-b45c-49b8-b2b9-d2a7bd7f68b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-1724227/.minikube"}}
	{"specversion":"1.0","id":"359d7617-6374-4739-b152-f7ad00442538","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"ff53a7d8-a00b-41d5-a50e-839d1f9b771b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d8de00c5-5b04-4003-9e2e-0803233ad1bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-222108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-222108
--- PASS: TestErrorJSONOutput (0.20s)
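
Every line that start --output=json emits is a CloudEvents envelope like the ones above, with the event kind in .type and the payload under .data; the failure here surfaces as an io.k8s.sigs.minikube.error event carrying exit code 56 and DRV_UNSUPPORTED_OS. A minimal sketch for filtering those error events out of a run; jq is an assumption (the test parses the JSON in Go) and the profile name is simply reused from the log:

    # keep only error events from a JSON-mode start (assumes jq is installed)
    out/minikube-linux-amd64 start -p json-output-error-222108 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -c 'select(.type == "io.k8s.sigs.minikube.error") | .data'
    out/minikube-linux-amd64 delete -p json-output-error-222108   # clean up the profile, as the test does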

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (90.08s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-011664 --driver=kvm2  --container-runtime=crio
E0127 12:00:07.002117 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/functional-977534/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-011664 --driver=kvm2  --container-runtime=crio: (43.47677491s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-024321 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-024321 --driver=kvm2  --container-runtime=crio: (43.595371861s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-011664
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-024321
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-024321" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-024321
helpers_test.go:175: Cleaning up "first-011664" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-011664
--- PASS: TestMinikubeProfile (90.08s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (24.58s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-601435 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-601435 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.583789797s)
--- PASS: TestMountStart/serial/StartWithMountFirst (24.58s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-601435 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-601435 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (28.16s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-621953 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0127 12:01:36.331250 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-621953 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.159644134s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.16s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-621953 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-621953 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.89s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-601435 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.89s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-621953 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-621953 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-621953
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-621953: (1.269960457s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
TestMountStart/serial/RestartStopped (21.47s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-621953
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-621953: (20.472541471s)
--- PASS: TestMountStart/serial/RestartStopped (21.47s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-621953 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-621953 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (107.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-589982 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-589982 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m46.869644923s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (107.27s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-589982 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-589982 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-589982 -- rollout status deployment/busybox: (4.186868787s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-589982 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-589982 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-589982 -- exec busybox-58667487b6-gp2qm -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-589982 -- exec busybox-58667487b6-vjh5c -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-589982 -- exec busybox-58667487b6-gp2qm -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-589982 -- exec busybox-58667487b6-vjh5c -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-589982 -- exec busybox-58667487b6-gp2qm -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-589982 -- exec busybox-58667487b6-vjh5c -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.58s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-589982 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-589982 -- exec busybox-58667487b6-gp2qm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-589982 -- exec busybox-58667487b6-gp2qm -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-589982 -- exec busybox-58667487b6-vjh5c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-589982 -- exec busybox-58667487b6-vjh5c -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.76s)
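
Note: the shell pipeline in the exec commands above, nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3, takes the fifth line of nslookup output and returns its third space-separated field, which the test then pings as the host IP. A rough Go equivalent is sketched below (stdlib plus the external nslookup binary; strings.Fields collapses runs of spaces, so it only approximates cut's single-space splitting):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// hostIPFromNslookup mimics: nslookup <name> | awk 'NR==5' | cut -d' ' -f3
	func hostIPFromNslookup(name string) (string, error) {
		out, err := exec.Command("nslookup", name).Output()
		if err != nil {
			return "", err
		}
		lines := strings.Split(string(out), "\n")
		if len(lines) < 5 {
			return "", fmt.Errorf("unexpected nslookup output: %q", out)
		}
		fields := strings.Fields(lines[4]) // NR==5 is the fifth line
		if len(fields) < 3 {
			return "", fmt.Errorf("unexpected line: %q", lines[4])
		}
		return fields[2], nil // third field, e.g. the resolved address
	}

	func main() {
		ip, err := hostIPFromNslookup("host.minikube.internal")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println(ip)
	}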

                                                
                                    
TestMultiNode/serial/AddNode (46.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-589982 -v 3 --alsologtostderr
E0127 12:04:39.395025 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-589982 -v 3 --alsologtostderr: (46.204674641s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (46.74s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-589982 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.56s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 cp testdata/cp-test.txt multinode-589982:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 ssh -n multinode-589982 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 cp multinode-589982:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3297566034/001/cp-test_multinode-589982.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 ssh -n multinode-589982 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 cp multinode-589982:/home/docker/cp-test.txt multinode-589982-m02:/home/docker/cp-test_multinode-589982_multinode-589982-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 ssh -n multinode-589982 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 ssh -n multinode-589982-m02 "sudo cat /home/docker/cp-test_multinode-589982_multinode-589982-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 cp multinode-589982:/home/docker/cp-test.txt multinode-589982-m03:/home/docker/cp-test_multinode-589982_multinode-589982-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 ssh -n multinode-589982 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 ssh -n multinode-589982-m03 "sudo cat /home/docker/cp-test_multinode-589982_multinode-589982-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 cp testdata/cp-test.txt multinode-589982-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 ssh -n multinode-589982-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 cp multinode-589982-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3297566034/001/cp-test_multinode-589982-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 ssh -n multinode-589982-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 cp multinode-589982-m02:/home/docker/cp-test.txt multinode-589982:/home/docker/cp-test_multinode-589982-m02_multinode-589982.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 ssh -n multinode-589982-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 ssh -n multinode-589982 "sudo cat /home/docker/cp-test_multinode-589982-m02_multinode-589982.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 cp multinode-589982-m02:/home/docker/cp-test.txt multinode-589982-m03:/home/docker/cp-test_multinode-589982-m02_multinode-589982-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 ssh -n multinode-589982-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 ssh -n multinode-589982-m03 "sudo cat /home/docker/cp-test_multinode-589982-m02_multinode-589982-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 cp testdata/cp-test.txt multinode-589982-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 ssh -n multinode-589982-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 cp multinode-589982-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3297566034/001/cp-test_multinode-589982-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 ssh -n multinode-589982-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 cp multinode-589982-m03:/home/docker/cp-test.txt multinode-589982:/home/docker/cp-test_multinode-589982-m03_multinode-589982.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 ssh -n multinode-589982-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 ssh -n multinode-589982 "sudo cat /home/docker/cp-test_multinode-589982-m03_multinode-589982.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 cp multinode-589982-m03:/home/docker/cp-test.txt multinode-589982-m02:/home/docker/cp-test_multinode-589982-m03_multinode-589982-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 ssh -n multinode-589982-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 ssh -n multinode-589982-m02 "sudo cat /home/docker/cp-test_multinode-589982-m03_multinode-589982-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.12s)

                                                
                                    
TestMultiNode/serial/StopNode (2.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 node stop m03
E0127 12:05:07.001897 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/functional-977534/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-589982 node stop m03: (1.43738924s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-589982 status: exit status 7 (415.638217ms)

                                                
                                                
-- stdout --
	multinode-589982
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-589982-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-589982-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-589982 status --alsologtostderr: exit status 7 (408.117548ms)

                                                
                                                
-- stdout --
	multinode-589982
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-589982-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-589982-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 12:05:07.988216 1756941 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:05:07.988358 1756941 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:05:07.988370 1756941 out.go:358] Setting ErrFile to fd 2...
	I0127 12:05:07.988376 1756941 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:05:07.988540 1756941 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-1724227/.minikube/bin
	I0127 12:05:07.988746 1756941 out.go:352] Setting JSON to false
	I0127 12:05:07.988786 1756941 mustload.go:65] Loading cluster: multinode-589982
	I0127 12:05:07.988877 1756941 notify.go:220] Checking for updates...
	I0127 12:05:07.989316 1756941 config.go:182] Loaded profile config "multinode-589982": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:05:07.989343 1756941 status.go:174] checking status of multinode-589982 ...
	I0127 12:05:07.989753 1756941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:05:07.989799 1756941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:05:08.005308 1756941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35763
	I0127 12:05:08.005763 1756941 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:05:08.006420 1756941 main.go:141] libmachine: Using API Version  1
	I0127 12:05:08.006454 1756941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:05:08.006788 1756941 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:05:08.007014 1756941 main.go:141] libmachine: (multinode-589982) Calling .GetState
	I0127 12:05:08.008692 1756941 status.go:371] multinode-589982 host status = "Running" (err=<nil>)
	I0127 12:05:08.008707 1756941 host.go:66] Checking if "multinode-589982" exists ...
	I0127 12:05:08.008996 1756941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:05:08.009040 1756941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:05:08.024788 1756941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35999
	I0127 12:05:08.025258 1756941 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:05:08.025696 1756941 main.go:141] libmachine: Using API Version  1
	I0127 12:05:08.025715 1756941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:05:08.026011 1756941 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:05:08.026188 1756941 main.go:141] libmachine: (multinode-589982) Calling .GetIP
	I0127 12:05:08.028768 1756941 main.go:141] libmachine: (multinode-589982) DBG | domain multinode-589982 has defined MAC address 52:54:00:19:33:63 in network mk-multinode-589982
	I0127 12:05:08.029247 1756941 main.go:141] libmachine: (multinode-589982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:33:63", ip: ""} in network mk-multinode-589982: {Iface:virbr1 ExpiryTime:2025-01-27 13:02:32 +0000 UTC Type:0 Mac:52:54:00:19:33:63 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:multinode-589982 Clientid:01:52:54:00:19:33:63}
	I0127 12:05:08.029281 1756941 main.go:141] libmachine: (multinode-589982) DBG | domain multinode-589982 has defined IP address 192.168.39.169 and MAC address 52:54:00:19:33:63 in network mk-multinode-589982
	I0127 12:05:08.029374 1756941 host.go:66] Checking if "multinode-589982" exists ...
	I0127 12:05:08.029732 1756941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:05:08.029780 1756941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:05:08.044302 1756941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42175
	I0127 12:05:08.044691 1756941 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:05:08.045178 1756941 main.go:141] libmachine: Using API Version  1
	I0127 12:05:08.045201 1756941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:05:08.045461 1756941 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:05:08.045630 1756941 main.go:141] libmachine: (multinode-589982) Calling .DriverName
	I0127 12:05:08.045795 1756941 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 12:05:08.045816 1756941 main.go:141] libmachine: (multinode-589982) Calling .GetSSHHostname
	I0127 12:05:08.048327 1756941 main.go:141] libmachine: (multinode-589982) DBG | domain multinode-589982 has defined MAC address 52:54:00:19:33:63 in network mk-multinode-589982
	I0127 12:05:08.048725 1756941 main.go:141] libmachine: (multinode-589982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:33:63", ip: ""} in network mk-multinode-589982: {Iface:virbr1 ExpiryTime:2025-01-27 13:02:32 +0000 UTC Type:0 Mac:52:54:00:19:33:63 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:multinode-589982 Clientid:01:52:54:00:19:33:63}
	I0127 12:05:08.048760 1756941 main.go:141] libmachine: (multinode-589982) DBG | domain multinode-589982 has defined IP address 192.168.39.169 and MAC address 52:54:00:19:33:63 in network mk-multinode-589982
	I0127 12:05:08.048892 1756941 main.go:141] libmachine: (multinode-589982) Calling .GetSSHPort
	I0127 12:05:08.049049 1756941 main.go:141] libmachine: (multinode-589982) Calling .GetSSHKeyPath
	I0127 12:05:08.049184 1756941 main.go:141] libmachine: (multinode-589982) Calling .GetSSHUsername
	I0127 12:05:08.049307 1756941 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/multinode-589982/id_rsa Username:docker}
	I0127 12:05:08.133432 1756941 ssh_runner.go:195] Run: systemctl --version
	I0127 12:05:08.139322 1756941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:05:08.153458 1756941 kubeconfig.go:125] found "multinode-589982" server: "https://192.168.39.169:8443"
	I0127 12:05:08.153492 1756941 api_server.go:166] Checking apiserver status ...
	I0127 12:05:08.153526 1756941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:05:08.167411 1756941 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1120/cgroup
	W0127 12:05:08.176240 1756941 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1120/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0127 12:05:08.176287 1756941 ssh_runner.go:195] Run: ls
	I0127 12:05:08.180197 1756941 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I0127 12:05:08.184362 1756941 api_server.go:279] https://192.168.39.169:8443/healthz returned 200:
	ok
	I0127 12:05:08.184388 1756941 status.go:463] multinode-589982 apiserver status = Running (err=<nil>)
	I0127 12:05:08.184400 1756941 status.go:176] multinode-589982 status: &{Name:multinode-589982 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 12:05:08.184432 1756941 status.go:174] checking status of multinode-589982-m02 ...
	I0127 12:05:08.184741 1756941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:05:08.184780 1756941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:05:08.200403 1756941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41707
	I0127 12:05:08.200777 1756941 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:05:08.201229 1756941 main.go:141] libmachine: Using API Version  1
	I0127 12:05:08.201253 1756941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:05:08.201558 1756941 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:05:08.201737 1756941 main.go:141] libmachine: (multinode-589982-m02) Calling .GetState
	I0127 12:05:08.203234 1756941 status.go:371] multinode-589982-m02 host status = "Running" (err=<nil>)
	I0127 12:05:08.203255 1756941 host.go:66] Checking if "multinode-589982-m02" exists ...
	I0127 12:05:08.203551 1756941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:05:08.203587 1756941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:05:08.218261 1756941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41497
	I0127 12:05:08.218673 1756941 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:05:08.219149 1756941 main.go:141] libmachine: Using API Version  1
	I0127 12:05:08.219180 1756941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:05:08.219498 1756941 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:05:08.219663 1756941 main.go:141] libmachine: (multinode-589982-m02) Calling .GetIP
	I0127 12:05:08.222064 1756941 main.go:141] libmachine: (multinode-589982-m02) DBG | domain multinode-589982-m02 has defined MAC address 52:54:00:c0:c8:07 in network mk-multinode-589982
	I0127 12:05:08.222444 1756941 main.go:141] libmachine: (multinode-589982-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:c8:07", ip: ""} in network mk-multinode-589982: {Iface:virbr1 ExpiryTime:2025-01-27 13:03:32 +0000 UTC Type:0 Mac:52:54:00:c0:c8:07 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-589982-m02 Clientid:01:52:54:00:c0:c8:07}
	I0127 12:05:08.222465 1756941 main.go:141] libmachine: (multinode-589982-m02) DBG | domain multinode-589982-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c0:c8:07 in network mk-multinode-589982
	I0127 12:05:08.222575 1756941 host.go:66] Checking if "multinode-589982-m02" exists ...
	I0127 12:05:08.222907 1756941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:05:08.222963 1756941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:05:08.237473 1756941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41645
	I0127 12:05:08.237850 1756941 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:05:08.238323 1756941 main.go:141] libmachine: Using API Version  1
	I0127 12:05:08.238349 1756941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:05:08.238627 1756941 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:05:08.238838 1756941 main.go:141] libmachine: (multinode-589982-m02) Calling .DriverName
	I0127 12:05:08.239050 1756941 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 12:05:08.239078 1756941 main.go:141] libmachine: (multinode-589982-m02) Calling .GetSSHHostname
	I0127 12:05:08.241645 1756941 main.go:141] libmachine: (multinode-589982-m02) DBG | domain multinode-589982-m02 has defined MAC address 52:54:00:c0:c8:07 in network mk-multinode-589982
	I0127 12:05:08.242051 1756941 main.go:141] libmachine: (multinode-589982-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:c8:07", ip: ""} in network mk-multinode-589982: {Iface:virbr1 ExpiryTime:2025-01-27 13:03:32 +0000 UTC Type:0 Mac:52:54:00:c0:c8:07 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-589982-m02 Clientid:01:52:54:00:c0:c8:07}
	I0127 12:05:08.242083 1756941 main.go:141] libmachine: (multinode-589982-m02) DBG | domain multinode-589982-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c0:c8:07 in network mk-multinode-589982
	I0127 12:05:08.242215 1756941 main.go:141] libmachine: (multinode-589982-m02) Calling .GetSSHPort
	I0127 12:05:08.242396 1756941 main.go:141] libmachine: (multinode-589982-m02) Calling .GetSSHKeyPath
	I0127 12:05:08.242550 1756941 main.go:141] libmachine: (multinode-589982-m02) Calling .GetSSHUsername
	I0127 12:05:08.242704 1756941 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-1724227/.minikube/machines/multinode-589982-m02/id_rsa Username:docker}
	I0127 12:05:08.317391 1756941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:05:08.329954 1756941 status.go:176] multinode-589982-m02 status: &{Name:multinode-589982-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0127 12:05:08.329993 1756941 status.go:174] checking status of multinode-589982-m03 ...
	I0127 12:05:08.330341 1756941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:05:08.330383 1756941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:05:08.345766 1756941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39547
	I0127 12:05:08.346235 1756941 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:05:08.346785 1756941 main.go:141] libmachine: Using API Version  1
	I0127 12:05:08.346818 1756941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:05:08.347147 1756941 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:05:08.347336 1756941 main.go:141] libmachine: (multinode-589982-m03) Calling .GetState
	I0127 12:05:08.348755 1756941 status.go:371] multinode-589982-m03 host status = "Stopped" (err=<nil>)
	I0127 12:05:08.348772 1756941 status.go:384] host is not running, skipping remaining checks
	I0127 12:05:08.348779 1756941 status.go:176] multinode-589982-m03 status: &{Name:multinode-589982-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.26s)
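
Note: in the stderr above, the status check locates the apiserver process and then treats an HTTP 200 "ok" from https://192.168.39.169:8443/healthz as a running apiserver. A rough stdlib-only sketch of that kind of probe follows (the address is the one from the log; InsecureSkipVerify stands in for the cluster CA handling minikube itself performs, so this is illustrative only):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Probe the apiserver healthz endpoint, skipping certificate checks.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.169:8443/healthz")
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}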

                                                
                                    
TestMultiNode/serial/StartAfterStop (37.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-589982 node start m03 -v=7 --alsologtostderr: (37.350025903s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (37.97s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (328.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-589982
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-589982
E0127 12:06:36.335323 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-589982: (3m3.019375664s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-589982 --wait=true -v=8 --alsologtostderr
E0127 12:10:07.002494 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/functional-977534/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-589982 --wait=true -v=8 --alsologtostderr: (2m24.968659694s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-589982
--- PASS: TestMultiNode/serial/RestartKeepsNodes (328.09s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-589982 node delete m03: (2.143538939s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.68s)
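
Note: the go-template passed to kubectl above walks every node's status.conditions and prints the value of the Ready condition, so after the delete the output should contain one "True" per remaining node. A minimal stdlib sketch that applies the same template to "kubectl get nodes -o json" read from stdin (decoding into a generic map is an assumption of this sketch, not something the test itself does):

	package main

	import (
		"encoding/json"
		"os"
		"text/template"
	)

	// nodeReady is the template used by the test above, verbatim.
	const nodeReady = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	func main() {
		var nodes map[string]interface{}
		if err := json.NewDecoder(os.Stdin).Decode(&nodes); err != nil {
			panic(err)
		}
		tmpl := template.Must(template.New("ready").Parse(nodeReady))
		if err := tmpl.Execute(os.Stdout, nodes); err != nil {
			panic(err)
		}
	}

Piping "kubectl get nodes -o json" into this program should print one line per node with its Ready status.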

                                                
                                    
TestMultiNode/serial/StopMultiNode (181.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 stop
E0127 12:11:36.335142 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:13:10.071918 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/functional-977534/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-589982 stop: (3m1.660991162s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-589982 status: exit status 7 (86.096061ms)

                                                
                                                
-- stdout --
	multinode-589982
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-589982-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-589982 status --alsologtostderr: exit status 7 (85.670935ms)

                                                
                                                
-- stdout --
	multinode-589982
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-589982-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 12:14:18.885986 1759853 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:14:18.886286 1759853 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:14:18.886297 1759853 out.go:358] Setting ErrFile to fd 2...
	I0127 12:14:18.886302 1759853 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:14:18.886542 1759853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-1724227/.minikube/bin
	I0127 12:14:18.886776 1759853 out.go:352] Setting JSON to false
	I0127 12:14:18.886808 1759853 mustload.go:65] Loading cluster: multinode-589982
	I0127 12:14:18.886906 1759853 notify.go:220] Checking for updates...
	I0127 12:14:18.887308 1759853 config.go:182] Loaded profile config "multinode-589982": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:14:18.887331 1759853 status.go:174] checking status of multinode-589982 ...
	I0127 12:14:18.887745 1759853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:14:18.887787 1759853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:14:18.902598 1759853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42799
	I0127 12:14:18.903040 1759853 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:14:18.903577 1759853 main.go:141] libmachine: Using API Version  1
	I0127 12:14:18.903596 1759853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:14:18.903910 1759853 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:14:18.904101 1759853 main.go:141] libmachine: (multinode-589982) Calling .GetState
	I0127 12:14:18.905733 1759853 status.go:371] multinode-589982 host status = "Stopped" (err=<nil>)
	I0127 12:14:18.905759 1759853 status.go:384] host is not running, skipping remaining checks
	I0127 12:14:18.905767 1759853 status.go:176] multinode-589982 status: &{Name:multinode-589982 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 12:14:18.905806 1759853 status.go:174] checking status of multinode-589982-m02 ...
	I0127 12:14:18.906089 1759853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:14:18.906143 1759853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:14:18.920611 1759853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38071
	I0127 12:14:18.921021 1759853 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:14:18.921444 1759853 main.go:141] libmachine: Using API Version  1
	I0127 12:14:18.921464 1759853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:14:18.921740 1759853 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:14:18.921919 1759853 main.go:141] libmachine: (multinode-589982-m02) Calling .GetState
	I0127 12:14:18.923237 1759853 status.go:371] multinode-589982-m02 host status = "Stopped" (err=<nil>)
	I0127 12:14:18.923250 1759853 status.go:384] host is not running, skipping remaining checks
	I0127 12:14:18.923255 1759853 status.go:176] multinode-589982-m02 status: &{Name:multinode-589982-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (181.83s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (112.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-589982 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0127 12:15:07.002192 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/functional-977534/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-589982 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m52.424854599s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589982 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (112.93s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (41.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-589982
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-589982-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-589982-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (63.679641ms)

                                                
                                                
-- stdout --
	* [multinode-589982-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20318
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20318-1724227/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-1724227/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-589982-m02' is duplicated with machine name 'multinode-589982-m02' in profile 'multinode-589982'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-589982-m03 --driver=kvm2  --container-runtime=crio
E0127 12:16:36.330932 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-589982-m03 --driver=kvm2  --container-runtime=crio: (40.759693103s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-589982
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-589982: exit status 80 (212.681216ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-589982 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-589982-m03 already exists in multinode-589982-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-589982-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (41.84s)

                                                
                                    
TestScheduledStopUnix (116.92s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-375791 --memory=2048 --driver=kvm2  --container-runtime=crio
E0127 12:20:07.001997 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/functional-977534/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-375791 --memory=2048 --driver=kvm2  --container-runtime=crio: (45.284079976s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-375791 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-375791 -n scheduled-stop-375791
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-375791 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0127 12:20:31.620933 1731396 retry.go:31] will retry after 79.508µs: open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/scheduled-stop-375791/pid: no such file or directory
I0127 12:20:31.622095 1731396 retry.go:31] will retry after 187.078µs: open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/scheduled-stop-375791/pid: no such file or directory
I0127 12:20:31.623256 1731396 retry.go:31] will retry after 272.801µs: open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/scheduled-stop-375791/pid: no such file or directory
I0127 12:20:31.624387 1731396 retry.go:31] will retry after 269.684µs: open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/scheduled-stop-375791/pid: no such file or directory
I0127 12:20:31.625517 1731396 retry.go:31] will retry after 645.406µs: open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/scheduled-stop-375791/pid: no such file or directory
I0127 12:20:31.626631 1731396 retry.go:31] will retry after 528.401µs: open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/scheduled-stop-375791/pid: no such file or directory
I0127 12:20:31.627743 1731396 retry.go:31] will retry after 1.466044ms: open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/scheduled-stop-375791/pid: no such file or directory
I0127 12:20:31.629942 1731396 retry.go:31] will retry after 1.401836ms: open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/scheduled-stop-375791/pid: no such file or directory
I0127 12:20:31.632196 1731396 retry.go:31] will retry after 3.231196ms: open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/scheduled-stop-375791/pid: no such file or directory
I0127 12:20:31.636418 1731396 retry.go:31] will retry after 2.994918ms: open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/scheduled-stop-375791/pid: no such file or directory
I0127 12:20:31.639728 1731396 retry.go:31] will retry after 8.221962ms: open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/scheduled-stop-375791/pid: no such file or directory
I0127 12:20:31.648949 1731396 retry.go:31] will retry after 12.156868ms: open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/scheduled-stop-375791/pid: no such file or directory
I0127 12:20:31.662171 1731396 retry.go:31] will retry after 17.671913ms: open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/scheduled-stop-375791/pid: no such file or directory
I0127 12:20:31.680386 1731396 retry.go:31] will retry after 14.660275ms: open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/scheduled-stop-375791/pid: no such file or directory
I0127 12:20:31.695614 1731396 retry.go:31] will retry after 37.23899ms: open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/scheduled-stop-375791/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-375791 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-375791 -n scheduled-stop-375791
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-375791
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-375791 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0127 12:21:19.398092 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:21:36.334709 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-375791
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-375791: exit status 7 (69.99271ms)

                                                
                                                
-- stdout --
	scheduled-stop-375791
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-375791 -n scheduled-stop-375791
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-375791 -n scheduled-stop-375791: exit status 7 (64.057508ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-375791" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-375791
--- PASS: TestScheduledStopUnix (116.92s)
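
Note: the retry.go lines above show the test polling for the scheduled-stop pid file and sleeping for a roughly growing interval after each failed open. A minimal sketch of that retry-with-backoff pattern (stdlib only; the path and attempt count are illustrative placeholders, not the test's actual values):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForFile keeps trying to open path, roughly doubling the wait after
	// each failure, much like the intervals in the retry.go log lines above.
	func waitForFile(path string, attempts int) (*os.File, error) {
		wait := 100 * time.Microsecond
		for i := 0; i < attempts; i++ {
			f, err := os.Open(path)
			if err == nil {
				return f, nil
			}
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
			wait *= 2
		}
		return nil, fmt.Errorf("%s did not appear after %d attempts", path, attempts)
	}

	func main() {
		f, err := waitForFile("/tmp/scheduled-stop-example/pid", 15) // placeholder path
		if err != nil {
			fmt.Println(err)
			return
		}
		f.Close()
		fmt.Println("pid file found")
	}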

                                                
                                    
TestRunningBinaryUpgrade (214.61s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3495864616 start -p running-upgrade-385378 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3495864616 start -p running-upgrade-385378 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m57.8372345s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-385378 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-385378 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m33.187327512s)
helpers_test.go:175: Cleaning up "running-upgrade-385378" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-385378
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-385378: (1.239002974s)
--- PASS: TestRunningBinaryUpgrade (214.61s)
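The upgrade path exercised here is: bring the profile up with an older released binary, re-run start on the same profile with the binary under test, then delete it. A rough sketch of that sequence, assuming placeholder paths and a placeholder profile name (the real test downloads the old release to a temporary file):

package main

import (
	"log"
	"os/exec"
)

func run(bin string, args ...string) {
	if out, err := exec.Command(bin, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v failed: %v\n%s", bin, args, err, out)
	}
}

func main() {
	profile := "running-upgrade-demo"    // placeholder profile name
	oldBin := "/tmp/minikube-v1.26.0"    // placeholder path to an older release
	newBin := "out/minikube-linux-amd64" // binary under test

	// 1. Start a cluster with the old release (legacy --vm-driver flag).
	run(oldBin, "start", "-p", profile, "--memory=2200", "--vm-driver=kvm2", "--container-runtime=crio")
	// 2. Upgrade in place: run "start" again on the same profile with the new binary.
	run(newBin, "start", "-p", profile, "--memory=2200", "--driver=kvm2", "--container-runtime=crio")
	// 3. Clean up, as the test helper does.
	run(newBin, "delete", "-p", profile)
}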

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-270668 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-270668 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (76.483355ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-270668] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20318
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20318-1724227/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-1724227/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
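This case only verifies flag validation: combining --no-kubernetes with --kubernetes-version must fail fast with exit status 14 (MK_USAGE) and the message shown in stderr above. A hedged sketch of asserting exactly that, with a placeholder profile name:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "NoKubernetes-demo",
		"--no-kubernetes", "--kubernetes-version=1.20", "--driver=kvm2", "--container-runtime=crio")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	if !errors.As(err, &exitErr) || exitErr.ExitCode() != 14 {
		log.Fatalf("expected usage error (exit 14), got %v\n%s", err, out)
	}
	if !strings.Contains(string(out), "cannot specify --kubernetes-version with --no-kubernetes") {
		log.Fatalf("unexpected error text:\n%s", out)
	}
	fmt.Println("flag combination rejected as expected")
}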

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (90.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-270668 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-270668 --driver=kvm2  --container-runtime=crio: (1m30.522186333s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-270668 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (90.78s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.74s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.74s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (134.83s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3562645553 start -p stopped-upgrade-010618 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3562645553 start -p stopped-upgrade-010618 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m24.88165183s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3562645553 -p stopped-upgrade-010618 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3562645553 -p stopped-upgrade-010618 stop: (1.380784291s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-010618 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-010618 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (48.562083075s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (134.83s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (59.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-270668 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-270668 --no-kubernetes --driver=kvm2  --container-runtime=crio: (57.874741632s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-270668 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-270668 status -o json: exit status 2 (254.535613ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-270668","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-270668
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-270668: (1.414528767s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (59.54s)
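The JSON blob above is what the test inspects after a --no-kubernetes start: the host stays Running while the kubelet and API server report Stopped. A small sketch of decoding it; the struct fields follow the keys in the captured output, and the binary path and profile name are placeholders:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "NoKubernetes-demo",
		"status", "-o", "json").Output()
	// A non-zero exit is expected here (kubelet stopped), so only give up when
	// there is no output to parse at all.
	if len(out) == 0 {
		log.Fatalf("no status output: %v", err)
	}
	var st profileStatus
	if err := json.Unmarshal(out, &st); err != nil {
		log.Fatalf("parse status: %v", err)
	}
	fmt.Printf("host=%s kubelet=%s\n", st.Host, st.Kubelet)
	if st.Host != "Running" || st.Kubelet != "Stopped" {
		log.Fatalf("unexpected state: %+v", st)
	}
}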

                                                
                                    
TestNoKubernetes/serial/Start (28.98s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-270668 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-270668 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.983068903s)
--- PASS: TestNoKubernetes/serial/Start (28.98s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-270668 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-270668 "sudo systemctl is-active --quiet service kubelet": exit status 1 (204.513617ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
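The "not running" verification is a plain SSH probe: systemctl is-active --quiet service kubelet inside the VM has to exit non-zero. A short sketch of the same probe, with placeholder binary and profile names:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// "is-active --quiet" exits 0 only when the unit is active, so an error here
	// is the expected (passing) outcome when Kubernetes has not been started.
	cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "NoKubernetes-demo",
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err == nil {
		log.Fatal("kubelet is active, but it should not be running")
	}
	fmt.Println("kubelet inactive, as expected")
}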

                                                
                                    
TestNoKubernetes/serial/ProfileList (29.89s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (15.819175451s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
E0127 12:25:07.002180 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/functional-977534/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (14.069226927s)
--- PASS: TestNoKubernetes/serial/ProfileList (29.89s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-270668
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-270668: (1.308593256s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (22.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-270668 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-270668 --driver=kvm2  --container-runtime=crio: (22.418769398s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (22.42s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.77s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-010618
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.77s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-270668 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-270668 "sudo systemctl is-active --quiet service kubelet": exit status 1 (201.985238ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
TestNetworkPlugins/group/false (3.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-956477 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-956477 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (110.845139ms)

                                                
                                                
-- stdout --
	* [false-956477] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20318
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20318-1724227/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-1724227/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 12:25:40.284123 1768325 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:25:40.284277 1768325 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:25:40.284291 1768325 out.go:358] Setting ErrFile to fd 2...
	I0127 12:25:40.284298 1768325 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:25:40.284575 1768325 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-1724227/.minikube/bin
	I0127 12:25:40.285422 1768325 out.go:352] Setting JSON to false
	I0127 12:25:40.286899 1768325 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":32881,"bootTime":1737947859,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 12:25:40.287046 1768325 start.go:139] virtualization: kvm guest
	I0127 12:25:40.288944 1768325 out.go:177] * [false-956477] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 12:25:40.290244 1768325 notify.go:220] Checking for updates...
	I0127 12:25:40.290275 1768325 out.go:177]   - MINIKUBE_LOCATION=20318
	I0127 12:25:40.291423 1768325 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:25:40.292610 1768325 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20318-1724227/kubeconfig
	I0127 12:25:40.293655 1768325 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-1724227/.minikube
	I0127 12:25:40.294683 1768325 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 12:25:40.295707 1768325 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:25:40.297289 1768325 config.go:182] Loaded profile config "cert-expiration-103712": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:25:40.297454 1768325 config.go:182] Loaded profile config "force-systemd-flag-980891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:25:40.297584 1768325 config.go:182] Loaded profile config "kubernetes-upgrade-029294": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 12:25:40.297704 1768325 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:25:40.332724 1768325 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 12:25:40.333668 1768325 start.go:297] selected driver: kvm2
	I0127 12:25:40.333684 1768325 start.go:901] validating driver "kvm2" against <nil>
	I0127 12:25:40.333702 1768325 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:25:40.335634 1768325 out.go:201] 
	W0127 12:25:40.336732 1768325 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0127 12:25:40.337742 1768325 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-956477 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-956477

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-956477

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-956477

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-956477

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-956477

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-956477

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-956477

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-956477

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-956477

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-956477

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-956477"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-956477"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-956477"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-956477

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-956477"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-956477"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-956477" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-956477" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-956477" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-956477" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-956477" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-956477" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-956477" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-956477" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-956477"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-956477"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-956477"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-956477"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-956477"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-956477" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-956477" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-956477" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-956477"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-956477"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-956477"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-956477"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-956477"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-956477

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-956477"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-956477"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-956477"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-956477"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-956477"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-956477"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-956477"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-956477"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-956477"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-956477"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-956477"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-956477"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-956477"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-956477"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-956477"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-956477"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-956477"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-956477"

                                                
                                                
----------------------- debugLogs end: false-956477 [took: 2.845688915s] --------------------------------
helpers_test.go:175: Cleaning up "false-956477" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-956477
--- PASS: TestNetworkPlugins/group/false (3.10s)
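The only assertion in this group is the negative one: --cni=false is incompatible with the crio runtime, so start exits with status 14 and the "requires CNI" usage error, and the debugLogs dump that follows is expected to find no such profile. A sketch of that rejection check, with an illustrative profile name:

package main

import (
	"errors"
	"log"
	"os/exec"
	"strings"
)

// requireCNIRejection starts a profile with CNI disabled on the crio runtime and
// returns nil only if minikube refuses with the usage error seen above.
func requireCNIRejection(bin, profile string) error {
	out, err := exec.Command(bin, "start", "-p", profile, "--memory=2048",
		"--cni=false", "--driver=kvm2", "--container-runtime=crio").CombinedOutput()
	var exitErr *exec.ExitError
	if !errors.As(err, &exitErr) || exitErr.ExitCode() != 14 {
		return errors.New("expected MK_USAGE (exit 14)")
	}
	if !strings.Contains(string(out), `The "crio" container runtime requires CNI`) {
		return errors.New("missing CNI usage message")
	}
	return nil
}

func main() {
	if err := requireCNIRejection("out/minikube-linux-amd64", "false-demo"); err != nil {
		log.Fatal(err)
	}
	log.Println("crio without CNI rejected as expected")
}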

                                                
                                    
TestPause/serial/Start (82.65s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-502641 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-502641 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m22.645991657s)
--- PASS: TestPause/serial/Start (82.65s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (93.5s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-472479 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-472479 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (1m33.497538129s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (93.50s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (60.75s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-798169 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-798169 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (1m0.750224618s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (60.75s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-798169 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f4784635-46bc-43a1-a049-ae2923c0d06f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f4784635-46bc-43a1-a049-ae2923c0d06f] Running
E0127 12:30:07.002236 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/functional-977534/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003811675s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-798169 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.28s)
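DeployApp applies testdata/busybox.yaml, waits up to 8m0s for a pod labelled integration-test=busybox to become healthy, then reads the container's open-file limit with ulimit -n. A compressed sketch of that wait-and-exec loop built on kubectl; the context name and poll interval are placeholders, not the test's actual values:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func kubectl(ctx string, args ...string) (string, error) {
	full := append([]string{"--context", ctx}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	ctx := "embed-certs-demo" // placeholder context name

	if _, err := kubectl(ctx, "create", "-f", "testdata/busybox.yaml"); err != nil {
		log.Fatalf("create: %v", err)
	}

	// Poll until the labelled pod reports phase Running (the test waits up to 8m).
	deadline := time.Now().Add(8 * time.Minute)
	for {
		phase, _ := kubectl(ctx, "get", "pods", "-l", "integration-test=busybox",
			"-o", "jsonpath={.items[0].status.phase}")
		if phase == "Running" {
			break
		}
		if time.Now().After(deadline) {
			log.Fatalf("pod never became Running (last phase %q)", phase)
		}
		time.Sleep(2 * time.Second)
	}

	limit, err := kubectl(ctx, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
	if err != nil {
		log.Fatalf("exec: %v", err)
	}
	fmt.Println("open file limit in the pod:", limit)
}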

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (57.91s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-485564 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-485564 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (57.913879533s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (57.91s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-798169 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-798169 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)
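EnableAddonWhileActive enables metrics-server with an overridden image and registry, then describes the deployment to confirm the override took effect. A sketch of the same two steps; the profile name is a placeholder, and the "fake.domain" check mirrors the flags shown above rather than the test's exact assertion:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	profile := "embed-certs-demo" // placeholder profile/context name

	// Enable the addon with an overridden image and registry, as in the log above.
	enable := exec.Command("out/minikube-linux-amd64", "addons", "enable", "metrics-server",
		"-p", profile,
		"--images=MetricsServer=registry.k8s.io/echoserver:1.4",
		"--registries=MetricsServer=fake.domain")
	if out, err := enable.CombinedOutput(); err != nil {
		log.Fatalf("enable addon: %v\n%s", err, out)
	}

	// Confirm the deployment picked up the overridden registry.
	describe, err := exec.Command("kubectl", "--context", profile,
		"describe", "deploy/metrics-server", "-n", "kube-system").CombinedOutput()
	if err != nil {
		log.Fatalf("describe: %v\n%s", err, describe)
	}
	if strings.Contains(string(describe), "fake.domain") {
		fmt.Println("metrics-server is using the overridden registry")
	}
}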

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (91s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-798169 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-798169 --alsologtostderr -v=3: (1m30.997012206s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-472479 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [471e55f9-1e25-4998-a8c1-9f3c737865b3] Pending
helpers_test.go:344: "busybox" [471e55f9-1e25-4998-a8c1-9f3c737865b3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [471e55f9-1e25-4998-a8c1-9f3c737865b3] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004706803s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-472479 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-472479 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-472479 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (90.99s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-472479 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-472479 --alsologtostderr -v=3: (1m30.987647468s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (90.99s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-485564 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bb5ad9e8-ca42-4e79-a13f-128de7e0a61b] Pending
helpers_test.go:344: "busybox" [bb5ad9e8-ca42-4e79-a13f-128de7e0a61b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [bb5ad9e8-ca42-4e79-a13f-128de7e0a61b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.004290267s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-485564 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.9s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-485564 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-485564 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.90s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-485564 --alsologtostderr -v=3
E0127 12:31:36.327186 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-485564 --alsologtostderr -v=3: (1m31.035958861s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.04s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-798169 -n embed-certs-798169
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-798169 -n embed-certs-798169: exit status 7 (66.443471ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-798169 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (295.95s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-798169 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-798169 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (4m55.689167282s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-798169 -n embed-certs-798169
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (295.95s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-472479 -n no-preload-472479
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-472479 -n no-preload-472479: exit status 7 (89.147234ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-472479 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-485564 -n default-k8s-diff-port-485564
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-485564 -n default-k8s-diff-port-485564: exit status 7 (76.603714ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-485564 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (3.58s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-488586 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-488586 --alsologtostderr -v=3: (3.576082414s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.58s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-488586 -n old-k8s-version-488586
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-488586 -n old-k8s-version-488586: exit status 7 (67.925433ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-488586 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-65qvh" [466a250d-639f-451d-a8d1-37310c4b1aff] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004318325s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-65qvh" [466a250d-639f-451d-a8d1-37310c4b1aff] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004168396s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-798169 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-798169 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)
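VerifyKubernetesImages lists the images loaded in the cluster and reports any that are not part of minikube's own set, as in the two "Found non-minikube image" lines above. A loose sketch of that scan, assuming the default one-reference-per-line output of "image list" and an illustrative allowlist:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// List images in the cluster; the default output prints one reference per line
	// (the test itself asks for --format=json, this sketch keeps it simpler).
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "embed-certs-demo",
		"image", "list").Output()
	if err != nil {
		log.Fatalf("image list: %v", err)
	}

	// Flag anything outside the registries minikube's own components come from
	// (illustrative prefixes, not the test's real expected-image table).
	expected := []string{"registry.k8s.io/", "gcr.io/k8s-minikube/storage-provisioner"}
	for _, img := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		known := false
		for _, prefix := range expected {
			if strings.HasPrefix(img, prefix) {
				known = true
				break
			}
		}
		if !known {
			fmt.Println("found non-minikube image:", img)
		}
	}
}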

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.48s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-798169 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-798169 -n embed-certs-798169
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-798169 -n embed-certs-798169: exit status 2 (240.321436ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-798169 -n embed-certs-798169
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-798169 -n embed-certs-798169: exit status 2 (249.457545ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-798169 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-798169 -n embed-certs-798169
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-798169 -n embed-certs-798169
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.48s)
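Exit status 2 in the pause sequence above corresponds to the Paused/Stopped component state shown in stdout, and the test explicitly treats it as acceptable ("may be ok"). A minimal manual equivalent, using the binary and profile from this run:

    out/minikube-linux-amd64 pause -p embed-certs-798169
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p embed-certs-798169   # prints Paused, exit status 2
    out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p embed-certs-798169     # prints Stopped, exit status 2
    out/minikube-linux-amd64 unpause -p embed-certs-798169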

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (46.13s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-947992 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-947992 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (46.131625402s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (46.13s)
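The start above passes --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 through to kubeadm. A hedged way to confirm the CIDR landed in the cluster configuration after start (assumes kubeadm's usual kubeadm-config ConfigMap layout; not something this test checks):

    kubectl --context newest-cni-947992 -n kube-system get configmap kubeadm-config \
      -o jsonpath='{.data.ClusterConfiguration}' | grep podSubnet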

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.49s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-947992 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-947992 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.492542233s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.49s)
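The enable step overrides the addon's image and registry on the command line. A sketch of the same invocation with the profile name parameterized (PROFILE is a placeholder; the values are the ones used above):

    PROFILE=newest-cni-947992
    out/minikube-linux-amd64 addons enable metrics-server -p "$PROFILE" \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain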

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-947992 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-947992 --alsologtostderr -v=3: (10.327484645s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.33s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-947992 -n newest-cni-947992
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-947992 -n newest-cni-947992: exit status 7 (67.05293ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-947992 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (36.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-947992 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0127 12:37:59.399481 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/addons-010792/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-947992 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (36.020581847s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-947992 -n newest-cni-947992
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.28s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-947992 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-947992 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-947992 -n newest-cni-947992
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-947992 -n newest-cni-947992: exit status 2 (230.129483ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-947992 -n newest-cni-947992
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-947992 -n newest-cni-947992: exit status 2 (236.165682ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-947992 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-947992 -n newest-cni-947992
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-947992 -n newest-cni-947992
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (81.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-956477 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-956477 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m21.806026569s)
--- PASS: TestNetworkPlugins/group/auto/Start (81.81s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-956477 "pgrep -a kubelet"
I0127 12:39:54.764488 1731396 config.go:182] Loaded profile config "auto-956477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-956477 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-m574f" [27d7d802-cfac-4104-99b6-3eccfafb9e61] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-m574f" [27d7d802-cfac-4104-99b6-3eccfafb9e61] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004635536s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-956477 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-956477 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-956477 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
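DNS, Localhost, and HairPin are the same three probes repeated for every network plugin in this group: a lookup of kubernetes.default through cluster DNS, a loopback connect, and a connect from the pod back to the netcat name it sits behind (the hairpin case). Collected into one sketch, using the commands exactly as the tests run them:

    kubectl --context auto-956477 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-956477 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-956477 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"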

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (58.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-956477 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-956477 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (58.164286054s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (58.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-tnj69" [75b8b435-44c0-4852-9ad2-33cb8e547445] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00491532s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
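The ControllerPod step waits for the kindnet pod (label app=kindnet) to become healthy before the connectivity probes run. A hedged stand-alone equivalent using kubectl wait (an assumption, not the helper the test itself uses):

    kubectl --context kindnet-956477 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m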

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-956477 "pgrep -a kubelet"
I0127 12:41:25.327919 1731396 config.go:182] Loaded profile config "kindnet-956477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-956477 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-8nmpw" [49b07b32-12b9-4f1c-baa2-da1defc75c6f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-8nmpw" [49b07b32-12b9-4f1c-baa2-da1defc75c6f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004047161s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-956477 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-956477 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-956477 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (80.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-956477 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-956477 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m20.599151887s)
--- PASS: TestNetworkPlugins/group/calico/Start (80.60s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-vjrgg" [55adf878-565a-4ad1-9f4a-b9376fec3d94] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004876591s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-956477 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-956477 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-pclbk" [4fdf5974-e723-4df5-80b6-1d9d220bce73] Pending
helpers_test.go:344: "netcat-5d86dc444-pclbk" [4fdf5974-e723-4df5-80b6-1d9d220bce73] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-pclbk" [4fdf5974-e723-4df5-80b6-1d9d220bce73] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.00326121s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-956477 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-956477 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-956477 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (68.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-956477 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-956477 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m8.458087697s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (68.46s)
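Unlike the named-plugin runs above, this start supplies the CNI as a manifest file via --cni=testdata/kube-flannel.yaml. The same invocation, reformatted onto multiple lines for readability (any local flannel manifest path could be substituted for the testdata one):

    out/minikube-linux-amd64 start -p custom-flannel-956477 --memory=3072 \
      --alsologtostderr --wait=true --wait-timeout=15m \
      --cni=testdata/kube-flannel.yaml --driver=kvm2 --container-runtime=crio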

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-956477 "pgrep -a kubelet"
E0127 12:44:54.972597 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/auto-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:44:54.978986 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/auto-956477/client.crt: no such file or directory" logger="UnhandledError"
I0127 12:44:54.989229 1731396 config.go:182] Loaded profile config "custom-flannel-956477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-956477 replace --force -f testdata/netcat-deployment.yaml
E0127 12:44:54.990570 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/auto-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:44:55.011997 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/auto-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:44:55.053436 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/auto-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:44:55.135565 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/auto-956477/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-xb72k" [b3eba7a3-76a6-44bd-90f5-08dee020eeb5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0127 12:44:55.297844 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/auto-956477/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:44:55.619499 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/auto-956477/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-xb72k" [b3eba7a3-76a6-44bd-90f5-08dee020eeb5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004288494s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-956477 exec deployment/netcat -- nslookup kubernetes.default
E0127 12:45:05.226648 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/auto-956477/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-956477 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-956477 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (76.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-956477 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-956477 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m16.486172585s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (76.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-956477 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-956477 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-4sh2l" [3bb67972-2336-4695-857e-66680a07649a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-4sh2l" [3bb67972-2336-4695-857e-66680a07649a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003806425s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-956477 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-956477 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-956477 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (70.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-956477 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-956477 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m10.702049721s)
--- PASS: TestNetworkPlugins/group/flannel/Start (70.70s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-dw84b" [22079b87-ad50-40c7-8b3c-c3f5e6b980bd] Running
E0127 12:48:12.853328 1731396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-1724227/.minikube/profiles/calico-956477/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004052525s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-956477 "pgrep -a kubelet"
I0127 12:48:18.876885 1731396 config.go:182] Loaded profile config "flannel-956477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-956477 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-sclw2" [eda0cb83-9222-4e60-99b2-d85abb215bb0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-sclw2" [eda0cb83-9222-4e60-99b2-d85abb215bb0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003436368s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-956477 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-956477 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-956477 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (52.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-956477 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-956477 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (52.623218307s)
--- PASS: TestNetworkPlugins/group/bridge/Start (52.62s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-956477 "pgrep -a kubelet"
I0127 12:49:37.860081 1731396 config.go:182] Loaded profile config "bridge-956477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-956477 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-zrwlq" [58fb3795-5cb8-44fc-a2fc-0b58ff851cdd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-zrwlq" [58fb3795-5cb8-44fc-a2fc-0b58ff851cdd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003105983s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-956477 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-956477 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-956477 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                    

Test skip (39/312)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.32.1/cached-images 0
15 TestDownloadOnly/v1.32.1/binaries 0
16 TestDownloadOnly/v1.32.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.31
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
118 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
119 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
122 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
123 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestGvisorAddon 0
178 TestImageBuild 0
205 TestKicCustomNetwork 0
206 TestKicExistingNetwork 0
207 TestKicCustomSubnet 0
208 TestKicStaticIP 0
240 TestChangeNoneUser 0
243 TestScheduledStopWindows 0
245 TestSkaffold 0
247 TestInsufficientStorage 0
251 TestMissingContainerUpgrade 0
268 TestStartStop/group/disable-driver-mounts 0.14
274 TestNetworkPlugins/group/kubenet 3.02
282 TestNetworkPlugins/group/cilium 3.52
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.31s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-010792 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.31s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver

--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)
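Note: this skip is expected on a crio job. The skip message points at minikube's docker-env, which exposes the Docker daemon inside the node to the host's docker client and therefore only applies when the cluster runs the docker container runtime. For reference, a typical invocation (the profile name here is a placeholder, not taken from this run):

    # Point the host's docker client at the Docker daemon inside the minikube node.
    # Only meaningful when the cluster was started with the docker container runtime.
    eval $(minikube -p skaffold-demo docker-env)
    docker ps    # now lists containers running inside the node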

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-620207" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-620207
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-956477 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-956477

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-956477

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-956477

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-956477

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-956477

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-956477

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-956477

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-956477

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-956477

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-956477

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-956477"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-956477"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-956477"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-956477

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-956477"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-956477"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-956477" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-956477" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-956477" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-956477" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-956477" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-956477" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-956477" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-956477" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-956477"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-956477"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-956477"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-956477"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-956477"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-956477" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-956477" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-956477" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-956477"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-956477"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-956477"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-956477"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-956477"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-956477

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-956477"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-956477"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-956477"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-956477"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-956477"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-956477"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-956477"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-956477"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-956477"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-956477"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-956477"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-956477"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-956477"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-956477"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-956477"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-956477"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-956477"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-956477"

                                                
                                                
----------------------- debugLogs end: kubenet-956477 [took: 2.873351555s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-956477" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-956477
--- SKIP: TestNetworkPlugins/group/kubenet (3.02s)
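Note: the repeated "context was not found" and "Profile ... not found" messages in the debugLogs block above are expected: the test skips before the kubenet-956477 cluster is ever started, so the post-skip debugLogs helper probes a kubectl context and a minikube profile that were never created. The same errors can be reproduced with ordinary commands (profile name taken from the log):

    # A kubectl context that was never created fails the same way as in the debug log:
    kubectl --context kubenet-956477 get pods
    # Error in configuration: context was not found for specified context: kubenet-956477

    # Likewise for minikube commands against the non-existent profile:
    minikube -p kubenet-956477 ssh -- cat /etc/resolv.conf
    # * Profile "kubenet-956477" not found. Run "minikube profile list" to view all profiles.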

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-956477 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-956477

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-956477

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-956477

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-956477

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-956477

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-956477

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-956477

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-956477

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-956477

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-956477

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-956477"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-956477"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-956477"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-956477

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-956477"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-956477"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-956477" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-956477" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-956477" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-956477" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-956477" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-956477" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-956477" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-956477" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-956477"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-956477"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-956477"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-956477"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-956477"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-956477

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-956477

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-956477" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-956477" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-956477

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-956477

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-956477" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-956477" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-956477" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-956477" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-956477" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-956477"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-956477"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-956477"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-956477"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-956477"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-956477

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-956477"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-956477"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-956477"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-956477"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-956477"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-956477"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-956477"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-956477"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-956477"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-956477"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-956477"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-956477"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-956477"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-956477"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-956477"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-956477"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-956477"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-956477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-956477"

                                                
                                                
----------------------- debugLogs end: cilium-956477 [took: 3.366027854s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-956477" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-956477
--- SKIP: TestNetworkPlugins/group/cilium (3.52s)
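Note: the cilium block fails the same way as the kubenet block above, and its ">>> k8s: kubectl config:" section shows an entirely empty kubeconfig (clusters, contexts and users all null). That is the output kubectl typically prints when its kubeconfig is empty or missing; an illustrative way to reproduce it, assuming an empty config file:

    # With an empty kubeconfig, 'config view' prints the null skeleton seen in the log.
    KUBECONFIG=/dev/null kubectl config view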

                                                
                                    